00:00:00.001 Started by upstream project "autotest-per-patch" build number 124203
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.053 The recommended git tool is: git
00:00:00.053 using credential 00000000-0000-0000-0000-000000000002
00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.074 Fetching changes from the remote Git repository
00:00:00.075 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.100 Using shallow fetch with depth 1
00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.100 > git --version # timeout=10
00:00:00.128 > git --version # 'git version 2.39.2'
00:00:00.128 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.106 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.116 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.128 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:07.128 > git config core.sparsecheckout # timeout=10
00:00:07.138 > git read-tree -mu HEAD # timeout=10
00:00:07.154 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:07.173 Commit message: "pool: fixes for VisualBuild class"
00:00:07.174 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:07.272 [Pipeline] Start of Pipeline
00:00:07.287 [Pipeline] library
00:00:07.289 Loading library shm_lib@master
00:00:07.289 Library shm_lib@master is cached. Copying from home.
00:00:07.305 [Pipeline] node
00:00:07.318 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.319 [Pipeline] {
00:00:07.330 [Pipeline] catchError
00:00:07.332 [Pipeline] {
00:00:07.348 [Pipeline] wrap
00:00:07.358 [Pipeline] {
00:00:07.365 [Pipeline] stage
00:00:07.366 [Pipeline] { (Prologue)
00:00:07.537 [Pipeline] sh
00:00:07.822 + logger -p user.info -t JENKINS-CI
00:00:07.841 [Pipeline] echo
00:00:07.843 Node: CYP11
00:00:07.851 [Pipeline] sh
00:00:08.152 [Pipeline] setCustomBuildProperty
00:00:08.162 [Pipeline] echo
00:00:08.163 Cleanup processes
00:00:08.166 [Pipeline] sh
00:00:08.458 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.458 293758 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.475 [Pipeline] sh
00:00:08.771 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.771 ++ grep -v 'sudo pgrep'
00:00:08.771 ++ awk '{print $1}'
00:00:08.771 + sudo kill -9
00:00:08.771 + true
00:00:08.814 [Pipeline] cleanWs
00:00:08.823 [WS-CLEANUP] Deleting project workspace...
00:00:08.823 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.831 [WS-CLEANUP] done
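Note: the "Cleanup processes" step above is a plain pgrep pipeline — list anything still running out of the workspace, drop the pgrep invocation itself, keep only the PIDs, and kill them without failing the step. A minimal standalone sketch of that idiom (the workspace path is the one from this log; the wrapper script itself is illustrative):

  #!/usr/bin/env bash
  # Kill leftovers from a previous run; never fail the cleanup step itself.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # mirrors the '+ true' after 'kill -9' above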
00:00:08.833 [Pipeline] setCustomBuildProperty
00:00:08.843 [Pipeline] sh
00:00:09.125 + sudo git config --global --replace-all safe.directory '*'
00:00:09.179 [Pipeline] nodesByLabel
00:00:09.180 Found a total of 2 nodes with the 'sorcerer' label
00:00:09.187 [Pipeline] httpRequest
00:00:09.191 HttpMethod: GET
00:00:09.192 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:09.196 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:09.200 Response Code: HTTP/1.1 200 OK
00:00:09.200 Success: Status code 200 is in the accepted range: 200,404
00:00:09.201 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:11.185 [Pipeline] sh
00:00:11.474 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:11.492 [Pipeline] httpRequest
00:00:11.498 HttpMethod: GET
00:00:11.498 URL: http://10.211.164.101/packages/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz
00:00:11.499 Sending request to url: http://10.211.164.101/packages/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz
00:00:11.517 Response Code: HTTP/1.1 200 OK
00:00:11.518 Success: Status code 200 is in the accepted range: 200,404
00:00:11.518 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz
00:00:44.177 [Pipeline] sh
00:00:44.464 + tar --no-same-owner -xf spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz
00:00:47.023 [Pipeline] sh
00:00:47.308 + git -C spdk log --oneline -n5
00:00:47.308 c5e2a446d autorun_post: Check if skipped tests were executed in per-patch
00:00:47.308 8b38652da test/fuzz: Rename llvm fuzzing tests
00:00:47.308 e55c9a812 vbdev_error: decrement error_num atomically
00:00:47.308 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid
00:00:47.308 2d610abe8 lib/env_dpdk: add spdk_get_tid function
00:00:47.320 [Pipeline] }
00:00:47.332 [Pipeline] // stage
00:00:47.340 [Pipeline] stage
00:00:47.342 [Pipeline] { (Prepare)
00:00:47.355 [Pipeline] writeFile
00:00:47.365 [Pipeline] sh
00:00:47.646 + logger -p user.info -t JENKINS-CI
00:00:47.658 [Pipeline] sh
00:00:47.940 + logger -p user.info -t JENKINS-CI
00:00:47.952 [Pipeline] sh
00:00:48.237 + cat autorun-spdk.conf
00:00:48.237 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.237 SPDK_TEST_NVMF=1
00:00:48.237 SPDK_TEST_NVME_CLI=1
00:00:48.237 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:48.237 SPDK_TEST_NVMF_NICS=e810
00:00:48.237 SPDK_TEST_VFIOUSER=1
00:00:48.237 SPDK_RUN_UBSAN=1
00:00:48.237 NET_TYPE=phy
00:00:48.248 RUN_NIGHTLY=0
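Note: autorun-spdk.conf is a flat KEY=VALUE file, which is why the test scripts further down can pull it in with a plain `source`. A minimal sketch of that consumption pattern, using only the keys shown above (the gating `if` is illustrative):

  conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]] && source "$conf"   # same existence guard as in the log below
  # Individual suites can then be gated on the flags, e.g.:
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      echo "NVMe-oF/TCP functional tests enabled (NICs: $SPDK_TEST_NVMF_NICS)"
  fi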
00:00:48.253 [Pipeline] readFile
00:00:48.278 [Pipeline] withEnv
00:00:48.280 [Pipeline] {
00:00:48.295 [Pipeline] sh
00:00:48.584 + set -ex
00:00:48.584 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:48.584 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:48.584 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.584 ++ SPDK_TEST_NVMF=1
00:00:48.584 ++ SPDK_TEST_NVME_CLI=1
00:00:48.584 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:48.584 ++ SPDK_TEST_NVMF_NICS=e810
00:00:48.584 ++ SPDK_TEST_VFIOUSER=1
00:00:48.584 ++ SPDK_RUN_UBSAN=1
00:00:48.584 ++ NET_TYPE=phy
00:00:48.584 ++ RUN_NIGHTLY=0
00:00:48.584 + case $SPDK_TEST_NVMF_NICS in
00:00:48.584 + DRIVERS=ice
00:00:48.584 + [[ tcp == \r\d\m\a ]]
00:00:48.584 + [[ -n ice ]]
00:00:48.584 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:48.584 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:48.584 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:48.584 rmmod: ERROR: Module irdma is not currently loaded
00:00:48.584 rmmod: ERROR: Module i40iw is not currently loaded
00:00:48.584 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:48.584 + true
00:00:48.584 + for D in $DRIVERS
00:00:48.584 + sudo modprobe ice
00:00:48.584 + exit 0
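Note: the driver step is deliberately tolerant — unloading every competing RDMA-capable module is allowed to fail (hence the five ERROR lines followed by '+ true'), and only the driver matching SPDK_TEST_NVMF_NICS is loaded back; e810 maps to the ice driver, as the trace above shows. The same idiom in isolation:

  case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;   # mapping taken from this log
  esac
  # Unload competing modules; ignore "not currently loaded" errors.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
      sudo modprobe "$D"
  done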
00:00:48.594 [Pipeline] }
00:00:48.613 [Pipeline] // withEnv
00:00:48.618 [Pipeline] }
00:00:48.635 [Pipeline] // stage
00:00:48.644 [Pipeline] catchError
00:00:48.646 [Pipeline] {
00:00:48.662 [Pipeline] timeout
00:00:48.662 Timeout set to expire in 50 min
00:00:48.664 [Pipeline] {
00:00:48.680 [Pipeline] stage
00:00:48.682 [Pipeline] { (Tests)
00:00:48.700 [Pipeline] sh
00:00:49.021 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:49.022 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:49.022 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:49.022 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:49.022 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:49.022 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:49.022 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:49.022 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:49.022 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:49.022 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:49.022 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:49.022 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:49.022 + source /etc/os-release
00:00:49.022 ++ NAME='Fedora Linux'
00:00:49.022 ++ VERSION='38 (Cloud Edition)'
00:00:49.022 ++ ID=fedora
00:00:49.022 ++ VERSION_ID=38
00:00:49.022 ++ VERSION_CODENAME=
00:00:49.022 ++ PLATFORM_ID=platform:f38
00:00:49.022 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:49.022 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:49.022 ++ LOGO=fedora-logo-icon
00:00:49.022 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:49.022 ++ HOME_URL=https://fedoraproject.org/
00:00:49.022 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:49.022 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:49.022 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:49.022 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:49.022 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:49.022 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:49.022 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:49.022 ++ SUPPORT_END=2024-05-14
00:00:49.022 ++ VARIANT='Cloud Edition'
00:00:49.022 ++ VARIANT_ID=cloud
00:00:49.022 + uname -a
00:00:49.022 Linux spdk-cyp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:49.022 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:52.326 Hugepages
00:00:52.326 node hugesize free / total
00:00:52.326 node0 1048576kB 0 / 0
00:00:52.326 node0 2048kB 0 / 0
00:00:52.326 node1 1048576kB 0 / 0
00:00:52.326 node1 2048kB 0 / 0
00:00:52.326
00:00:52.326 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:52.326 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:52.326 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:52.326 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:52.326 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:52.326 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
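Note: `setup.sh status` prints the per-NUMA-node hugepage pools (all 0/0 here, since nothing has reserved hugepages yet) plus the PCI devices SPDK cares about. The hugepage table can be reproduced straight from sysfs; a sketch:

  # Matches the "node hugesize free / total" table above.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          size=${hp##*hugepages-}
          echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
      done
  done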
00:00:52.326 + rm -f /tmp/spdk-ld-path
00:00:52.326 + source autorun-spdk.conf
00:00:52.326 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.326 ++ SPDK_TEST_NVMF=1
00:00:52.326 ++ SPDK_TEST_NVME_CLI=1
00:00:52.326 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.326 ++ SPDK_TEST_NVMF_NICS=e810
00:00:52.326 ++ SPDK_TEST_VFIOUSER=1
00:00:52.326 ++ SPDK_RUN_UBSAN=1
00:00:52.326 ++ NET_TYPE=phy
00:00:52.326 ++ RUN_NIGHTLY=0
00:00:52.326 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:52.326 + [[ -n '' ]]
00:00:52.326 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:52.326 + for M in /var/spdk/build-*-manifest.txt
00:00:52.326 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:52.326 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:52.326 + for M in /var/spdk/build-*-manifest.txt
00:00:52.326 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:52.326 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:52.326 ++ uname
00:00:52.326 + [[ Linux == \L\i\n\u\x ]]
00:00:52.326 + sudo dmesg -T
00:00:52.326 + sudo dmesg --clear
00:00:52.587 + dmesg_pid=294852
00:00:52.587 + [[ Fedora Linux == FreeBSD ]]
00:00:52.587 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:52.587 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:52.587 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:52.587 + [[ -x /usr/src/fio-static/fio ]]
00:00:52.587 + export FIO_BIN=/usr/src/fio-static/fio
00:00:52.587 + FIO_BIN=/usr/src/fio-static/fio
00:00:52.587 + sudo dmesg -Tw
00:00:52.587 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:52.587 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:52.587 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:52.587 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:52.587 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:52.587 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:52.587 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:52.587 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
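Note: the QEMU probing above is a conditional-export pattern — each binary is exported only if its install prefix exists, so later tests can rely on VFIO_QEMU_BIN / QEMU_BIN without knowing the CI host's layout. Reduced to its core (a sketch, paths as in this log):

  # Prefer an already-set VFIO_QEMU_BIN; otherwise use the CI install, if present.
  if [[ ! -v VFIO_QEMU_BIN && -e /usr/local/qemu/vfio-user-latest ]]; then
      export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
  fi
  [[ -e /usr/local/qemu/vanilla-latest ]] &&
      export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64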
00:00:52.587 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:52.587 Test configuration:
00:00:52.587 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.587 SPDK_TEST_NVMF=1
00:00:52.587 SPDK_TEST_NVME_CLI=1
00:00:52.587 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.587 SPDK_TEST_NVMF_NICS=e810
00:00:52.587 SPDK_TEST_VFIOUSER=1
00:00:52.587 SPDK_RUN_UBSAN=1
00:00:52.587 NET_TYPE=phy
00:00:52.587 RUN_NIGHTLY=0
12:04:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:52.587 12:04:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:52.587 12:04:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:52.587 12:04:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:52.587 12:04:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.587 12:04:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.587 12:04:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.587 12:04:58 -- paths/export.sh@5 -- $ export PATH
00:00:52.587 12:04:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.587 12:04:58 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:52.588 12:04:58 -- common/autobuild_common.sh@437 -- $ date +%s
00:00:52.588 12:04:58 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013898.XXXXXX
00:00:52.588 12:04:58 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013898.uMcAK1
00:00:52.588 12:04:58 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:00:52.588 12:04:58 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:00:52.588 12:04:58 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:52.588 12:04:58 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:52.588 12:04:58 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:52.588 12:04:58 -- common/autobuild_common.sh@453 -- $ get_config_params
00:00:52.588 12:04:58 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:52.588 12:04:58 -- common/autotest_common.sh@10 -- $ set +x
00:00:52.588 12:04:58 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:52.588 12:04:58 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:00:52.588 12:04:58 -- pm/common@17 -- $ local monitor
00:00:52.588 12:04:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.588 12:04:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.588 12:04:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.588 12:04:58 -- pm/common@21 -- $ date +%s
00:00:52.588 12:04:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.588 12:04:58 -- pm/common@21 -- $ date +%s
00:00:52.588 12:04:58 -- pm/common@25 -- $ sleep 1
00:00:52.588 12:04:58 -- pm/common@21 -- $ date +%s
00:00:52.588 12:04:58 -- pm/common@21 -- $ date +%s
00:00:52.588 12:04:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013898
00:00:52.588 12:04:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013898
00:00:52.588 12:04:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013898
00:00:52.588 12:04:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013898
00:00:52.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013898_collect-vmstat.pm.log
00:00:52.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013898_collect-cpu-load.pm.log
00:00:52.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013898_collect-cpu-temp.pm.log
00:00:52.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013898_collect-bmc-pm.bmc.pm.log
00:00:53.531 12:04:59 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
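Note: start_monitor_resources launches one collector per resource (CPU load, vmstat, CPU temperature, BMC power), each logging under output/power, and autobuild registers an EXIT trap so they are torn down when the build ends. The shape of that pattern as a sketch — the -d/-l/-p flags are exactly the ones in the trace above, the timestamp tag mirrors the date +%s calls, and stop_monitor_resources is SPDK's own helper from scripts/perf/pm:

  pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  tag=monitor.autobuild.sh.$(date +%s)
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      "$pm/$mon" -d "$out" -l -p "$tag"
  done
  sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "$tag"
  trap stop_monitor_resources EXIT   # SPDK helper; stops the collectors on exit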
00:00:53.531 12:04:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:53.531 12:04:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:53.531 12:04:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:53.531 12:04:59 -- spdk/autobuild.sh@16 -- $ date -u
00:00:53.531 Mon Jun 10 10:04:59 AM UTC 2024
00:00:53.531 12:04:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:53.531 v24.09-pre-55-gc5e2a446d
00:00:53.531 12:04:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:53.531 12:04:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:53.531 12:04:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:53.531 12:04:59 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:00:53.531 12:04:59 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:00:53.531 12:04:59 -- common/autotest_common.sh@10 -- $ set +x
00:00:53.792 ************************************
00:00:53.792 START TEST ubsan
00:00:53.792 ************************************
00:00:53.792 12:04:59 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:00:53.792 using ubsan
00:00:53.792
00:00:53.792 real 0m0.000s
00:00:53.792 user 0m0.000s
00:00:53.792 sys 0m0.000s
00:00:53.792 12:04:59 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:00:53.792 12:04:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:53.792 ************************************
00:00:53.792 END TEST ubsan
00:00:53.792 ************************************
00:00:53.792 12:04:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:53.792 12:04:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:53.792 12:04:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:53.792 12:04:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:53.792 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:53.792 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:54.365 Using 'verbs' RDMA provider
00:01:10.221 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:22.449 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:22.449 Creating mk/config.mk...done.
00:01:22.449 Creating mk/cc.flags.mk...done.
00:01:22.449 Type 'make' to build.
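Note: the build phase is the stock SPDK flow — configure with the feature set the job asked for, then a parallel make. The flags below are copied verbatim from the configure line above; running it by hand would look like:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j144   # same parallelism as 'run_test make make -j144' below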
00:01:22.449 12:05:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:22.449 12:05:27 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:22.449 12:05:27 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:22.449 12:05:27 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.449 ************************************
00:01:22.449 START TEST make
00:01:22.449 ************************************
00:01:22.449 12:05:27 make -- common/autotest_common.sh@1124 -- $ make -j144
00:01:22.449 make[1]: Nothing to be done for 'all'.
00:01:23.387 The Meson build system
00:01:23.388 Version: 1.3.1
00:01:23.388 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:23.388 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:23.388 Build type: native build
00:01:23.388 Project name: libvfio-user
00:01:23.388 Project version: 0.0.1
00:01:23.388 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:23.388 C linker for the host machine: cc ld.bfd 2.39-16
00:01:23.388 Host machine cpu family: x86_64
00:01:23.388 Host machine cpu: x86_64
00:01:23.388 Run-time dependency threads found: YES
00:01:23.388 Library dl found: YES
00:01:23.388 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:23.388 Run-time dependency json-c found: YES 0.17
00:01:23.388 Run-time dependency cmocka found: YES 1.1.7
00:01:23.388 Program pytest-3 found: NO
00:01:23.388 Program flake8 found: NO
00:01:23.388 Program misspell-fixer found: NO
00:01:23.388 Program restructuredtext-lint found: NO
00:01:23.388 Program valgrind found: YES (/usr/bin/valgrind)
00:01:23.388 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:23.388 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:23.388 Compiler for C supports arguments -Wwrite-strings: YES
00:01:23.388 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:23.388 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:23.388 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:23.388 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:23.388 Build targets in project: 8
00:01:23.388 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:23.388 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:23.388
00:01:23.388 libvfio-user 0.0.1
00:01:23.388
00:01:23.388 User defined options
00:01:23.388 buildtype : debug
00:01:23.388 default_library: shared
00:01:23.388 libdir : /usr/local/lib
00:01:23.388
00:01:23.388 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:23.646 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:23.646 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:23.905 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:23.905 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:23.905 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:23.905 [5/37] Compiling C object samples/null.p/null.c.o
00:01:23.905 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:23.905 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:23.905 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:23.905 [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:23.905 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:23.905 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:23.905 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:23.905 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:23.905 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:23.905 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:23.905 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:23.905 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:23.905 [18/37] Compiling C object samples/server.p/server.c.o
00:01:23.905 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:23.905 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:23.905 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:23.905 [22/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:23.905 [23/37] Compiling C object samples/client.p/client.c.o
00:01:23.905 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:23.905 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:23.905 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:23.905 [27/37] Linking target samples/client
00:01:23.905 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:23.905 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:23.905 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:23.905 [31/37] Linking target test/unit_tests
00:01:24.164 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:24.164 [33/37] Linking target samples/server
00:01:24.164 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:24.164 [35/37] Linking target samples/null
00:01:24.164 [36/37] Linking target samples/gpio-pci-idio-16
00:01:24.164 [37/37] Linking target samples/lspci
00:01:24.164 INFO: autodetecting backend as ninja
00:01:24.164 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
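Note: libvfio-user is built out of tree — meson has already generated build-debug, ninja compiles the 37 targets listed above, and the result is then staged into the SPDK tree with the DESTDIR install on the next line. By hand, the tail of that sequence is roughly (paths from this log):

  build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
  ninja -C "$build/build-debug"
  DESTDIR=$build meson install --quiet -C "$build/build-debug"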
00:01:24.164 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.427 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:24.427 ninja: no work to do.
00:01:31.036 The Meson build system
00:01:31.036 Version: 1.3.1
00:01:31.036 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:31.036 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:31.036 Build type: native build
00:01:31.036 Program cat found: YES (/usr/bin/cat)
00:01:31.036 Project name: DPDK
00:01:31.036 Project version: 24.03.0
00:01:31.036 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:31.036 C linker for the host machine: cc ld.bfd 2.39-16
00:01:31.036 Host machine cpu family: x86_64
00:01:31.036 Host machine cpu: x86_64
00:01:31.036 Message: ## Building in Developer Mode ##
00:01:31.036 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:31.036 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:31.036 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:31.036 Program python3 found: YES (/usr/bin/python3)
00:01:31.036 Program cat found: YES (/usr/bin/cat)
00:01:31.036 Compiler for C supports arguments -march=native: YES
00:01:31.036 Checking for size of "void *" : 8
00:01:31.036 Checking for size of "void *" : 8 (cached)
00:01:31.036 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:31.036 Library m found: YES
00:01:31.036 Library numa found: YES
00:01:31.036 Has header "numaif.h" : YES
00:01:31.036 Library fdt found: NO
00:01:31.036 Library execinfo found: NO
00:01:31.036 Has header "execinfo.h" : YES
00:01:31.036 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:31.036 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:31.036 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:31.036 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:31.036 Run-time dependency openssl found: YES 3.0.9
00:01:31.036 Run-time dependency libpcap found: YES 1.10.4
00:01:31.036 Has header "pcap.h" with dependency libpcap: YES
00:01:31.036 Compiler for C supports arguments -Wcast-qual: YES
00:01:31.036 Compiler for C supports arguments -Wdeprecated: YES
00:01:31.036 Compiler for C supports arguments -Wformat: YES
00:01:31.036 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:31.036 Compiler for C supports arguments -Wformat-security: NO
00:01:31.036 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.036 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:31.036 Compiler for C supports arguments -Wnested-externs: YES
00:01:31.036 Compiler for C supports arguments -Wold-style-definition: YES
00:01:31.036 Compiler for C supports arguments -Wpointer-arith: YES
00:01:31.036 Compiler for C supports arguments -Wsign-compare: YES
00:01:31.036 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:31.036 Compiler for C supports arguments -Wundef: YES
00:01:31.036 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.036 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:31.036 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:31.036 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.036 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:31.036 Program objdump found: YES (/usr/bin/objdump)
00:01:31.036 Compiler for C supports arguments -mavx512f: YES
00:01:31.036 Checking if "AVX512 checking" compiles: YES
00:01:31.036 Fetching value of define "__SSE4_2__" : 1
00:01:31.036 Fetching value of define "__AES__" : 1
00:01:31.036 Fetching value of define "__AVX__" : 1
00:01:31.036 Fetching value of define "__AVX2__" : 1
00:01:31.036 Fetching value of define "__AVX512BW__" : 1
00:01:31.036 Fetching value of define "__AVX512CD__" : 1
00:01:31.036 Fetching value of define "__AVX512DQ__" : 1
00:01:31.036 Fetching value of define "__AVX512F__" : 1
00:01:31.036 Fetching value of define "__AVX512VL__" : 1
00:01:31.036 Fetching value of define "__PCLMUL__" : 1
00:01:31.036 Fetching value of define "__RDRND__" : 1
00:01:31.036 Fetching value of define "__RDSEED__" : 1
00:01:31.036 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:31.036 Fetching value of define "__znver1__" : (undefined)
00:01:31.036 Fetching value of define "__znver2__" : (undefined)
00:01:31.036 Fetching value of define "__znver3__" : (undefined)
00:01:31.036 Fetching value of define "__znver4__" : (undefined)
00:01:31.036 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:31.036 Message: lib/log: Defining dependency "log"
00:01:31.036 Message: lib/kvargs: Defining dependency "kvargs"
00:01:31.036 Message: lib/telemetry: Defining dependency "telemetry"
00:01:31.036 Checking for function "getentropy" : NO
00:01:31.036 Message: lib/eal: Defining dependency "eal"
00:01:31.036 Message: lib/ring: Defining dependency "ring"
00:01:31.036 Message: lib/rcu: Defining dependency "rcu"
00:01:31.036 Message: lib/mempool: Defining dependency "mempool"
00:01:31.036 Message: lib/mbuf: Defining dependency "mbuf"
00:01:31.036 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:31.036 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:31.036 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:31.036 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:31.036 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:31.036 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:31.036 Compiler for C supports arguments -mpclmul: YES
00:01:31.036 Compiler for C supports arguments -maes: YES
00:01:31.036 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:31.036 Compiler for C supports arguments -mavx512bw: YES
00:01:31.036 Compiler for C supports arguments -mavx512dq: YES
00:01:31.036 Compiler for C supports arguments -mavx512vl: YES
00:01:31.036 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:31.036 Compiler for C supports arguments -mavx2: YES
00:01:31.036 Compiler for C supports arguments -mavx: YES
00:01:31.036 Message: lib/net: Defining dependency "net"
00:01:31.036 Message: lib/meter: Defining dependency "meter"
00:01:31.036 Message: lib/ethdev: Defining dependency "ethdev"
00:01:31.036 Message: lib/pci: Defining dependency "pci"
00:01:31.036 Message: lib/cmdline: Defining dependency "cmdline"
00:01:31.036 Message: lib/hash: Defining dependency "hash"
00:01:31.036 Message: lib/timer: Defining dependency "timer"
00:01:31.036 Message: lib/compressdev: Defining dependency "compressdev"
00:01:31.036 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:31.036 Message: lib/dmadev: Defining dependency "dmadev"
00:01:31.036 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:31.036 Message: lib/power: Defining dependency "power"
00:01:31.036 Message: lib/reorder: Defining dependency "reorder"
00:01:31.036 Message: lib/security: Defining dependency "security"
00:01:31.036 Has header "linux/userfaultfd.h" : YES
00:01:31.036 Has header "linux/vduse.h" : YES
00:01:31.036 Message: lib/vhost: Defining dependency "vhost"
00:01:31.036 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:31.036 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:31.036 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:31.036 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:31.036 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:31.036 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:31.036 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:31.036 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:31.036 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:31.036 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:31.036 Program doxygen found: YES (/usr/bin/doxygen)
00:01:31.036 Configuring doxy-api-html.conf using configuration
00:01:31.036 Configuring doxy-api-man.conf using configuration
00:01:31.036 Program mandb found: YES (/usr/bin/mandb)
00:01:31.036 Program sphinx-build found: NO
00:01:31.036 Configuring rte_build_config.h using configuration
00:01:31.036 Message:
00:01:31.036 =================
00:01:31.036 Applications Enabled
00:01:31.036 =================
00:01:31.036
00:01:31.037 apps:
00:01:31.037
00:01:31.037
00:01:31.037 Message:
00:01:31.037 =================
00:01:31.037 Libraries Enabled
00:01:31.037 =================
00:01:31.037
00:01:31.037 libs:
00:01:31.037 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:31.037 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:31.037 cryptodev, dmadev, power, reorder, security, vhost,
00:01:31.037
00:01:31.037 Message:
00:01:31.037 ===============
00:01:31.037 Drivers Enabled
00:01:31.037 ===============
00:01:31.037
00:01:31.037 common:
00:01:31.037
00:01:31.037 bus:
00:01:31.037 pci, vdev,
00:01:31.037 mempool:
00:01:31.037 ring,
00:01:31.037 dma:
00:01:31.037
00:01:31.037 net:
00:01:31.037
00:01:31.037 crypto:
00:01:31.037
00:01:31.037 compress:
00:01:31.037
00:01:31.037 vdpa:
00:01:31.037
00:01:31.037
00:01:31.037 Message:
00:01:31.037 =================
00:01:31.037 Content Skipped
00:01:31.037 =================
00:01:31.037
00:01:31.037 apps:
00:01:31.037 dumpcap: explicitly disabled via build config
00:01:31.037 graph: explicitly disabled via build config
00:01:31.037 pdump: explicitly disabled via build config
00:01:31.037 proc-info: explicitly disabled via build config
00:01:31.037 test-acl: explicitly disabled via build config
00:01:31.037 test-bbdev: explicitly disabled via build config
00:01:31.037 test-cmdline: explicitly disabled via build config
00:01:31.037 test-compress-perf: explicitly disabled via build config
00:01:31.037 test-crypto-perf: explicitly disabled via build config
00:01:31.037 test-dma-perf: explicitly disabled via build config
00:01:31.037 test-eventdev: explicitly disabled via build config
00:01:31.037 test-fib: explicitly disabled via build config
00:01:31.037 test-flow-perf: explicitly disabled via build config
00:01:31.037 test-gpudev: explicitly disabled via build config
00:01:31.037 test-mldev: explicitly disabled via build config
00:01:31.037 test-pipeline: explicitly disabled via build config
00:01:31.037 test-pmd: explicitly disabled via build config
00:01:31.037 test-regex: explicitly disabled via build config
00:01:31.037 test-sad: explicitly disabled via build config
00:01:31.037 test-security-perf: explicitly disabled via build config
00:01:31.037
00:01:31.037 libs:
00:01:31.037 argparse: explicitly disabled via build config
00:01:31.037 metrics: explicitly disabled via build config
00:01:31.037 acl: explicitly disabled via build config
00:01:31.037 bbdev: explicitly disabled via build config
00:01:31.037 bitratestats: explicitly disabled via build config
00:01:31.037 bpf: explicitly disabled via build config
00:01:31.037 cfgfile: explicitly disabled via build config
00:01:31.037 distributor: explicitly disabled via build config
00:01:31.037 efd: explicitly disabled via build config
00:01:31.037 eventdev: explicitly disabled via build config
00:01:31.037 dispatcher: explicitly disabled via build config
00:01:31.037 gpudev: explicitly disabled via build config
00:01:31.037 gro: explicitly disabled via build config
00:01:31.037 gso: explicitly disabled via build config
00:01:31.037 ip_frag: explicitly disabled via build config
00:01:31.037 jobstats: explicitly disabled via build config
00:01:31.037 latencystats: explicitly disabled via build config
00:01:31.037 lpm: explicitly disabled via build config
00:01:31.037 member: explicitly disabled via build config
00:01:31.037 pcapng: explicitly disabled via build config
00:01:31.037 rawdev: explicitly disabled via build config
00:01:31.037 regexdev: explicitly disabled via build config
00:01:31.037 mldev: explicitly disabled via build config
00:01:31.037 rib: explicitly disabled via build config
00:01:31.037 sched: explicitly disabled via build config
00:01:31.037 stack: explicitly disabled via build config
00:01:31.037 ipsec: explicitly disabled via build config
00:01:31.037 pdcp: explicitly disabled via build config
00:01:31.037 fib: explicitly disabled via build config
00:01:31.037 port: explicitly disabled via build config
00:01:31.037 pdump: explicitly disabled via build config
00:01:31.037 table: explicitly disabled via build config
00:01:31.037 pipeline: explicitly disabled via build config
00:01:31.037 graph: explicitly disabled via build config
00:01:31.037 node: explicitly disabled via build config
00:01:31.037
00:01:31.037 drivers:
00:01:31.037 common/cpt: not in enabled drivers build config
00:01:31.037 common/dpaax: not in enabled drivers build config
00:01:31.037 common/iavf: not in enabled drivers build config
00:01:31.037 common/idpf: not in enabled drivers build config
00:01:31.037 common/ionic: not in enabled drivers build config
00:01:31.037 common/mvep: not in enabled drivers build config
00:01:31.037 common/octeontx: not in enabled drivers build config
00:01:31.037 bus/auxiliary: not in enabled drivers build config
00:01:31.037 bus/cdx: not in enabled drivers build config
00:01:31.037 bus/dpaa: not in enabled drivers build config
00:01:31.037 bus/fslmc: not in enabled drivers build config
00:01:31.037 bus/ifpga: not in enabled drivers build config
00:01:31.037 bus/platform: not in enabled drivers build config
00:01:31.037 bus/uacce: not in enabled drivers build config
00:01:31.037 bus/vmbus: not in enabled drivers build config
00:01:31.037 common/cnxk: not in enabled drivers build config
00:01:31.037 common/mlx5: not in enabled drivers build config
00:01:31.037 common/nfp: not in enabled drivers build config
00:01:31.037 common/nitrox: not in enabled drivers build config
00:01:31.037 common/qat: not in enabled drivers build config
00:01:31.037 common/sfc_efx: not in enabled drivers build config
00:01:31.037 mempool/bucket: not in enabled drivers build config
00:01:31.037 mempool/cnxk: not in enabled drivers build config
00:01:31.037 mempool/dpaa: not in enabled drivers build config
00:01:31.037 mempool/dpaa2: not in enabled drivers build config
00:01:31.037 mempool/octeontx: not in enabled drivers build config
00:01:31.037 mempool/stack: not in enabled drivers build config
00:01:31.037 dma/cnxk: not in enabled drivers build config
00:01:31.037 dma/dpaa: not in enabled drivers build config
00:01:31.037 dma/dpaa2: not in enabled drivers build config
00:01:31.037 dma/hisilicon: not in enabled drivers build config
00:01:31.037 dma/idxd: not in enabled drivers build config
00:01:31.037 dma/ioat: not in enabled drivers build config
00:01:31.037 dma/skeleton: not in enabled drivers build config
00:01:31.037 net/af_packet: not in enabled drivers build config
00:01:31.037 net/af_xdp: not in enabled drivers build config
00:01:31.037 net/ark: not in enabled drivers build config
00:01:31.037 net/atlantic: not in enabled drivers build config
00:01:31.037 net/avp: not in enabled drivers build config
00:01:31.037 net/axgbe: not in enabled drivers build config
00:01:31.037 net/bnx2x: not in enabled drivers build config
00:01:31.037 net/bnxt: not in enabled drivers build config
00:01:31.037 net/bonding: not in enabled drivers build config
00:01:31.037 net/cnxk: not in enabled drivers build config
00:01:31.037 net/cpfl: not in enabled drivers build config
00:01:31.037 net/cxgbe: not in enabled drivers build config
00:01:31.037 net/dpaa: not in enabled drivers build config
00:01:31.037 net/dpaa2: not in enabled drivers build config
00:01:31.037 net/e1000: not in enabled drivers build config
00:01:31.037 net/ena: not in enabled drivers build config
00:01:31.037 net/enetc: not in enabled drivers build config
00:01:31.037 net/enetfec: not in enabled drivers build config
00:01:31.037 net/enic: not in enabled drivers build config
00:01:31.037 net/failsafe: not in enabled drivers build config
00:01:31.037 net/fm10k: not in enabled drivers build config
00:01:31.037 net/gve: not in enabled drivers build config
00:01:31.037 net/hinic: not in enabled drivers build config
00:01:31.037 net/hns3: not in enabled drivers build config
00:01:31.037 net/i40e: not in enabled drivers build config
00:01:31.037 net/iavf: not in enabled drivers build config
00:01:31.037 net/ice: not in enabled drivers build config
00:01:31.037 net/idpf: not in enabled drivers build config
00:01:31.037 net/igc: not in enabled drivers build config
00:01:31.037 net/ionic: not in enabled drivers build config
00:01:31.037 net/ipn3ke: not in enabled drivers build config
00:01:31.037 net/ixgbe: not in enabled drivers build config
00:01:31.037 net/mana: not in enabled drivers build config
00:01:31.037 net/memif: not in enabled drivers build config
00:01:31.037 net/mlx4: not in enabled drivers build config
00:01:31.037 net/mlx5: not in enabled drivers build config
00:01:31.037 net/mvneta: not in enabled drivers build config
00:01:31.037 net/mvpp2: not in enabled drivers build config
00:01:31.037 net/netvsc: not in enabled drivers build config
00:01:31.037 net/nfb: not in enabled drivers build config
00:01:31.037 net/nfp: not in enabled drivers build config
00:01:31.037 net/ngbe: not in enabled drivers build config
00:01:31.037 net/null: not in enabled drivers build config
00:01:31.037 net/octeontx: not in enabled drivers build config
00:01:31.037 net/octeon_ep: not in enabled drivers build config
00:01:31.037 net/pcap: not in enabled drivers build config
00:01:31.037 net/pfe: not in enabled drivers build config
00:01:31.037 net/qede: not in enabled drivers build config
00:01:31.037 net/ring: not in enabled drivers build config
00:01:31.037 net/sfc: not in enabled drivers build config
00:01:31.037 net/softnic: not in enabled drivers build config
00:01:31.037 net/tap: not in enabled drivers build config
00:01:31.037 net/thunderx: not in enabled drivers build config
00:01:31.037 net/txgbe: not in enabled drivers build config
00:01:31.037 net/vdev_netvsc: not in enabled drivers build config
00:01:31.037 net/vhost: not in enabled drivers build config
00:01:31.037 net/virtio: not in enabled drivers build config
00:01:31.037 net/vmxnet3: not in enabled drivers build config
00:01:31.037 raw/*: missing internal dependency, "rawdev"
00:01:31.037 crypto/armv8: not in enabled drivers build config
00:01:31.037 crypto/bcmfs: not in enabled drivers build config
00:01:31.037 crypto/caam_jr: not in enabled drivers build config
00:01:31.037 crypto/ccp: not in enabled drivers build config
00:01:31.037 crypto/cnxk: not in enabled drivers build config
00:01:31.037 crypto/dpaa_sec: not in enabled drivers build config
00:01:31.037 crypto/dpaa2_sec: not in enabled drivers build config
00:01:31.037 crypto/ipsec_mb: not in enabled drivers build config
00:01:31.037 crypto/mlx5: not in enabled drivers build config
00:01:31.037 crypto/mvsam: not in enabled drivers build config
00:01:31.037 crypto/nitrox: not in enabled drivers build config
00:01:31.037 crypto/null: not in enabled drivers build config
00:01:31.038 crypto/octeontx: not in enabled drivers build config
00:01:31.038 crypto/openssl: not in enabled drivers build config
00:01:31.038 crypto/scheduler: not in enabled drivers build config
00:01:31.038 crypto/uadk: not in enabled drivers build config
00:01:31.038 crypto/virtio: not in enabled drivers build config
00:01:31.038 compress/isal: not in enabled drivers build config
00:01:31.038 compress/mlx5: not in enabled drivers build config
00:01:31.038 compress/nitrox: not in enabled drivers build config
00:01:31.038 compress/octeontx: not in enabled drivers build config
00:01:31.038 compress/zlib: not in enabled drivers build config
00:01:31.038 regex/*: missing internal dependency, "regexdev"
00:01:31.038 ml/*: missing internal dependency, "mldev"
00:01:31.038 vdpa/ifc: not in enabled drivers build config
00:01:31.038 vdpa/mlx5: not in enabled drivers build config
00:01:31.038 vdpa/nfp: not in enabled drivers build config
00:01:31.038 vdpa/sfc: not in enabled drivers build config
00:01:31.038 event/*: missing internal dependency, "eventdev"
00:01:31.038 baseband/*: missing internal dependency, "bbdev"
00:01:31.038 gpu/*: missing internal dependency, "gpudev"
00:01:31.038
00:01:31.038
00:01:31.038 Build targets in project: 84
00:01:31.038
00:01:31.038 DPDK 24.03.0
00:01:31.038
00:01:31.038 User defined options
00:01:31.038 buildtype : debug
00:01:31.038 default_library : shared
00:01:31.038 libdir : lib
00:01:31.038 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.038 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:31.038 c_link_args :
00:01:31.038 cpu_instruction_set: native
00:01:31.038 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:31.038 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:31.038 enable_docs : false
00:01:31.038 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:31.038 enable_kmods : false
00:01:31.038 tests : false
00:01:31.038
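Note: the "User defined options" block above is the DPDK submodule's meson configuration as driven by SPDK's configure. It could be reproduced directly with meson — buildtype/default-library/libdir/prefix are generic meson options, while disable_apps, disable_libs, enable_drivers and tests are DPDK project options — so the following is a sketch rather than the exact command SPDK runs; the elided `...` lists are the ones printed above:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
  meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
      -Ddisable_apps=... -Ddisable_libs=... \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring -Dtests=false
  ninja -C build-tmp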
00:01:31.038 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.038 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:31.038 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:31.038 [2/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:31.038 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:31.038 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:31.038 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:31.038 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:31.038 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:31.038 [8/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:31.038 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:31.038 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:31.038 [11/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:31.296 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:31.296 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:31.296 [14/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:31.296 [15/267] Linking static target lib/librte_kvargs.a
00:01:31.296 [16/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:31.296 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:31.296 [18/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:31.296 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:31.296 [20/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:31.296 [21/267] Linking static target lib/librte_pci.a
00:01:31.296 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:31.296 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:31.296 [24/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:31.296 [25/267] Linking static target lib/librte_log.a
00:01:31.296 [26/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:31.296 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:31.296 [28/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:31.296 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:31.296 [30/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:31.296 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:31.296 [32/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:31.296 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:31.296 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:31.296 [35/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:31.554 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:31.555 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:31.555 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:31.555 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:31.555 [40/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:31.555 [41/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:31.555 [42/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:31.555 [43/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:31.555 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:31.555 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:31.555 [46/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:31.555 [47/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:31.555 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:31.555 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:31.555 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:31.555 [51/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:31.555 [52/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:31.555 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:31.555 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:31.555 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:31.555 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:31.555 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:31.555 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:31.555 [59/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:31.555 [60/267] Linking static target lib/librte_telemetry.a
00:01:31.555 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:31.555 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:31.555 [63/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:31.555 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:31.555 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:31.555 [66/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.555 [67/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.555 [68/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:31.555 [69/267] Linking static target lib/librte_meter.a
00:01:31.555 [70/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:31.555 [71/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.555 [73/267] Linking static target lib/librte_ring.a 00:01:31.555 [74/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.555 [75/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.555 [76/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.555 [77/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.555 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.555 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.555 [80/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.555 [81/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.555 [82/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.555 [83/267] Linking static target lib/librte_cmdline.a 00:01:31.555 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.555 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.555 [86/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:31.555 [87/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.555 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.555 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.555 [90/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.555 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.815 [92/267] Linking static target lib/librte_rcu.a 00:01:31.815 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.815 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.815 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.815 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.815 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.815 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.815 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.815 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.815 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.815 [102/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.815 [103/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.815 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.815 [105/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.815 [106/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.815 [107/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.815 [108/267] Linking static target lib/librte_dmadev.a 00:01:31.815 [109/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.815 [110/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.815 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.815 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.815 [113/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.815 [114/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.815 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.815 [116/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:31.815 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.815 [118/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:31.815 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.815 [120/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.815 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.815 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.815 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.815 [124/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.815 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.815 [126/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.815 [127/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.815 [128/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:31.815 [129/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.815 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.815 [131/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.815 [132/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.815 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.815 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.815 [135/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.815 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.815 [137/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.815 [138/267] Linking static target lib/librte_security.a 00:01:31.815 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.815 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.815 [141/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.815 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.815 [143/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.815 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.815 [145/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:31.815 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.815 [147/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.815 [148/267] Linking static target lib/librte_mempool.a 00:01:31.815 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.815 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.815 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.815 [152/267] Linking static target lib/librte_timer.a 00:01:31.815 
[153/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.815 [154/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.815 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.815 [156/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:31.816 [157/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.816 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.816 [159/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.076 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.076 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.076 [162/267] Linking target lib/librte_log.so.24.1 00:01:32.076 [163/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.076 [164/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.076 [165/267] Linking static target lib/librte_hash.a 00:01:32.076 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.076 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.076 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.076 [169/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.076 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.076 [171/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.076 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.076 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:32.076 [174/267] Linking static target lib/librte_power.a 00:01:32.076 [175/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.076 [176/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.076 [177/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.076 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.076 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.076 [180/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.076 [181/267] Linking static target lib/librte_net.a 00:01:32.076 [182/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.076 [183/267] Linking static target lib/librte_reorder.a 00:01:32.076 [184/267] Linking static target lib/librte_compressdev.a 00:01:32.076 [185/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.077 [186/267] Linking static target lib/librte_eal.a 00:01:32.077 [187/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.077 [188/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.077 [189/267] Linking static target drivers/librte_mempool_ring.a 00:01:32.077 [190/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:32.077 [191/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.077 [192/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.077 [193/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.077 [194/267] Linking static target lib/librte_cryptodev.a 00:01:32.077 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.077 [196/267] Linking target lib/librte_kvargs.so.24.1 00:01:32.077 [197/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.077 [198/267] Linking target lib/librte_telemetry.so.24.1 00:01:32.077 [199/267] Linking static target lib/librte_mbuf.a 00:01:32.077 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.077 [201/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.077 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.077 [203/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.077 [204/267] Linking static target drivers/librte_bus_pci.a 00:01:32.077 [205/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.077 [206/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.077 [207/267] Linking static target drivers/librte_bus_vdev.a 00:01:32.337 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.337 [209/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.337 [210/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:32.337 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.337 [212/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.337 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.337 [214/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.598 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.598 [216/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.598 [217/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.598 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.598 [219/267] Linking static target lib/librte_ethdev.a 00:01:32.859 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.859 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.859 [222/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.859 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.859 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.121 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.121 [226/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.695 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:33.695 [228/267] Linking static target lib/librte_vhost.a 00:01:34.268 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.180 [230/267] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.780 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.723 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.723 [233/267] Linking target lib/librte_eal.so.24.1 00:01:43.723 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:43.984 [235/267] Linking target lib/librte_pci.so.24.1 00:01:43.984 [236/267] Linking target lib/librte_ring.so.24.1 00:01:43.984 [237/267] Linking target lib/librte_timer.so.24.1 00:01:43.984 [238/267] Linking target lib/librte_meter.so.24.1 00:01:43.984 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:43.984 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:43.984 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:43.984 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:43.984 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:43.984 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:43.984 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:43.984 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:43.984 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:43.984 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:44.244 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:44.244 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:44.244 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:44.244 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:44.244 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:44.504 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:44.504 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:44.504 [256/267] Linking target lib/librte_net.so.24.1 00:01:44.504 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:44.504 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.504 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.504 [260/267] Linking target lib/librte_hash.so.24.1 00:01:44.504 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:44.504 [262/267] Linking target lib/librte_security.so.24.1 00:01:44.504 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:44.764 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.764 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:44.764 [266/267] Linking target lib/librte_power.so.24.1 00:01:44.764 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:44.764 INFO: autodetecting backend as ninja 00:01:44.764 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:46.149 CC lib/ut_mock/mock.o 00:01:46.149 CC lib/log/log.o 00:01:46.149 CC lib/log/log_flags.o 00:01:46.149 CC lib/log/log_deprecated.o 00:01:46.149 CC lib/ut/ut.o 00:01:46.149 LIB libspdk_log.a 00:01:46.149 LIB libspdk_ut_mock.a 00:01:46.149 LIB libspdk_ut.a 00:01:46.149 SO libspdk_log.so.7.0 00:01:46.149 
SO libspdk_ut_mock.so.6.0 00:01:46.149 SO libspdk_ut.so.2.0 00:01:46.149 SYMLINK libspdk_log.so 00:01:46.149 SYMLINK libspdk_ut_mock.so 00:01:46.149 SYMLINK libspdk_ut.so 00:01:46.408 CC lib/dma/dma.o 00:01:46.408 CC lib/util/base64.o 00:01:46.408 CC lib/util/bit_array.o 00:01:46.670 CC lib/util/cpuset.o 00:01:46.670 CC lib/util/crc16.o 00:01:46.670 CC lib/util/crc32.o 00:01:46.670 CC lib/util/crc32c.o 00:01:46.670 CC lib/ioat/ioat.o 00:01:46.670 CC lib/util/crc32_ieee.o 00:01:46.670 CC lib/util/crc64.o 00:01:46.670 CC lib/util/dif.o 00:01:46.670 CC lib/util/fd.o 00:01:46.670 CC lib/util/file.o 00:01:46.670 CC lib/util/hexlify.o 00:01:46.670 CC lib/util/iov.o 00:01:46.670 CC lib/util/math.o 00:01:46.670 CC lib/util/pipe.o 00:01:46.670 CC lib/util/strerror_tls.o 00:01:46.670 CC lib/util/uuid.o 00:01:46.670 CC lib/util/string.o 00:01:46.670 CC lib/util/fd_group.o 00:01:46.670 CXX lib/trace_parser/trace.o 00:01:46.670 CC lib/util/xor.o 00:01:46.670 CC lib/util/zipf.o 00:01:46.670 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.670 CC lib/vfio_user/host/vfio_user.o 00:01:46.670 LIB libspdk_dma.a 00:01:46.670 SO libspdk_dma.so.4.0 00:01:46.931 LIB libspdk_ioat.a 00:01:46.931 SYMLINK libspdk_dma.so 00:01:46.931 SO libspdk_ioat.so.7.0 00:01:46.931 SYMLINK libspdk_ioat.so 00:01:46.931 LIB libspdk_vfio_user.a 00:01:46.931 SO libspdk_vfio_user.so.5.0 00:01:46.931 LIB libspdk_util.a 00:01:47.191 SYMLINK libspdk_vfio_user.so 00:01:47.191 SO libspdk_util.so.9.0 00:01:47.191 SYMLINK libspdk_util.so 00:01:47.452 LIB libspdk_trace_parser.a 00:01:47.452 SO libspdk_trace_parser.so.5.0 00:01:47.452 CC lib/conf/conf.o 00:01:47.452 CC lib/json/json_util.o 00:01:47.452 CC lib/json/json_parse.o 00:01:47.452 CC lib/json/json_write.o 00:01:47.452 SYMLINK libspdk_trace_parser.so 00:01:47.452 CC lib/idxd/idxd.o 00:01:47.452 CC lib/idxd/idxd_user.o 00:01:47.452 CC lib/idxd/idxd_kernel.o 00:01:47.452 CC lib/rdma/common.o 00:01:47.452 CC lib/vmd/vmd.o 00:01:47.452 CC lib/rdma/rdma_verbs.o 00:01:47.452 CC lib/vmd/led.o 00:01:47.452 CC lib/env_dpdk/env.o 00:01:47.452 CC lib/env_dpdk/memory.o 00:01:47.452 CC lib/env_dpdk/pci.o 00:01:47.452 CC lib/env_dpdk/init.o 00:01:47.452 CC lib/env_dpdk/threads.o 00:01:47.452 CC lib/env_dpdk/pci_vmd.o 00:01:47.452 CC lib/env_dpdk/pci_ioat.o 00:01:47.452 CC lib/env_dpdk/pci_virtio.o 00:01:47.452 CC lib/env_dpdk/pci_idxd.o 00:01:47.452 CC lib/env_dpdk/pci_event.o 00:01:47.452 CC lib/env_dpdk/sigbus_handler.o 00:01:47.452 CC lib/env_dpdk/pci_dpdk.o 00:01:47.452 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.452 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.714 LIB libspdk_conf.a 00:01:47.714 SO libspdk_conf.so.6.0 00:01:47.714 LIB libspdk_json.a 00:01:47.714 LIB libspdk_rdma.a 00:01:47.714 SO libspdk_json.so.6.0 00:01:47.714 SO libspdk_rdma.so.6.0 00:01:47.975 SYMLINK libspdk_conf.so 00:01:47.975 SYMLINK libspdk_json.so 00:01:47.975 SYMLINK libspdk_rdma.so 00:01:47.975 LIB libspdk_idxd.a 00:01:47.975 SO libspdk_idxd.so.12.0 00:01:47.975 LIB libspdk_vmd.a 00:01:48.236 SO libspdk_vmd.so.6.0 00:01:48.236 SYMLINK libspdk_idxd.so 00:01:48.236 SYMLINK libspdk_vmd.so 00:01:48.236 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.236 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.236 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.236 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.527 LIB libspdk_jsonrpc.a 00:01:48.527 SO libspdk_jsonrpc.so.6.0 00:01:48.527 SYMLINK libspdk_jsonrpc.so 00:01:48.790 LIB libspdk_env_dpdk.a 00:01:48.790 SO libspdk_env_dpdk.so.14.1 00:01:49.050 CC lib/rpc/rpc.o 00:01:49.050 SYMLINK 
libspdk_env_dpdk.so 00:01:49.050 LIB libspdk_rpc.a 00:01:49.311 SO libspdk_rpc.so.6.0 00:01:49.311 SYMLINK libspdk_rpc.so 00:01:49.572 CC lib/keyring/keyring.o 00:01:49.572 CC lib/keyring/keyring_rpc.o 00:01:49.572 CC lib/trace/trace.o 00:01:49.572 CC lib/trace/trace_rpc.o 00:01:49.572 CC lib/trace/trace_flags.o 00:01:49.572 CC lib/notify/notify.o 00:01:49.572 CC lib/notify/notify_rpc.o 00:01:49.833 LIB libspdk_notify.a 00:01:49.833 LIB libspdk_keyring.a 00:01:49.833 SO libspdk_notify.so.6.0 00:01:49.833 SO libspdk_keyring.so.1.0 00:01:49.833 LIB libspdk_trace.a 00:01:49.833 SYMLINK libspdk_notify.so 00:01:49.833 SO libspdk_trace.so.10.0 00:01:49.833 SYMLINK libspdk_keyring.so 00:01:50.094 SYMLINK libspdk_trace.so 00:01:50.354 CC lib/sock/sock.o 00:01:50.354 CC lib/thread/thread.o 00:01:50.354 CC lib/sock/sock_rpc.o 00:01:50.354 CC lib/thread/iobuf.o 00:01:50.615 LIB libspdk_sock.a 00:01:50.615 SO libspdk_sock.so.9.0 00:01:50.875 SYMLINK libspdk_sock.so 00:01:51.136 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:51.136 CC lib/nvme/nvme_ctrlr.o 00:01:51.136 CC lib/nvme/nvme_fabric.o 00:01:51.136 CC lib/nvme/nvme_ns_cmd.o 00:01:51.136 CC lib/nvme/nvme_ns.o 00:01:51.136 CC lib/nvme/nvme_pcie_common.o 00:01:51.136 CC lib/nvme/nvme.o 00:01:51.136 CC lib/nvme/nvme_pcie.o 00:01:51.136 CC lib/nvme/nvme_qpair.o 00:01:51.136 CC lib/nvme/nvme_transport.o 00:01:51.136 CC lib/nvme/nvme_quirks.o 00:01:51.136 CC lib/nvme/nvme_discovery.o 00:01:51.136 CC lib/nvme/nvme_opal.o 00:01:51.137 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:51.137 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:51.137 CC lib/nvme/nvme_tcp.o 00:01:51.137 CC lib/nvme/nvme_io_msg.o 00:01:51.137 CC lib/nvme/nvme_poll_group.o 00:01:51.137 CC lib/nvme/nvme_zns.o 00:01:51.137 CC lib/nvme/nvme_stubs.o 00:01:51.137 CC lib/nvme/nvme_auth.o 00:01:51.137 CC lib/nvme/nvme_cuse.o 00:01:51.137 CC lib/nvme/nvme_vfio_user.o 00:01:51.137 CC lib/nvme/nvme_rdma.o 00:01:51.707 LIB libspdk_thread.a 00:01:51.707 SO libspdk_thread.so.10.0 00:01:51.707 SYMLINK libspdk_thread.so 00:01:51.968 CC lib/virtio/virtio_vhost_user.o 00:01:51.968 CC lib/virtio/virtio.o 00:01:51.968 CC lib/virtio/virtio_pci.o 00:01:51.968 CC lib/virtio/virtio_vfio_user.o 00:01:51.968 CC lib/init/json_config.o 00:01:51.968 CC lib/init/subsystem.o 00:01:51.968 CC lib/init/subsystem_rpc.o 00:01:51.968 CC lib/init/rpc.o 00:01:51.968 CC lib/blob/blobstore.o 00:01:51.968 CC lib/blob/request.o 00:01:51.968 CC lib/blob/zeroes.o 00:01:51.968 CC lib/blob/blob_bs_dev.o 00:01:51.968 CC lib/accel/accel.o 00:01:51.968 CC lib/accel/accel_rpc.o 00:01:51.968 CC lib/accel/accel_sw.o 00:01:51.968 CC lib/vfu_tgt/tgt_endpoint.o 00:01:51.968 CC lib/vfu_tgt/tgt_rpc.o 00:01:52.229 LIB libspdk_init.a 00:01:52.229 SO libspdk_init.so.5.0 00:01:52.229 LIB libspdk_virtio.a 00:01:52.229 LIB libspdk_vfu_tgt.a 00:01:52.229 SO libspdk_virtio.so.7.0 00:01:52.488 SYMLINK libspdk_init.so 00:01:52.488 SO libspdk_vfu_tgt.so.3.0 00:01:52.488 SYMLINK libspdk_virtio.so 00:01:52.488 SYMLINK libspdk_vfu_tgt.so 00:01:52.748 CC lib/event/app.o 00:01:52.748 CC lib/event/reactor.o 00:01:52.748 CC lib/event/log_rpc.o 00:01:52.748 CC lib/event/app_rpc.o 00:01:52.748 CC lib/event/scheduler_static.o 00:01:53.010 LIB libspdk_accel.a 00:01:53.010 LIB libspdk_nvme.a 00:01:53.010 SO libspdk_accel.so.15.0 00:01:53.010 SYMLINK libspdk_accel.so 00:01:53.010 SO libspdk_nvme.so.13.0 00:01:53.010 LIB libspdk_event.a 00:01:53.271 SO libspdk_event.so.13.1 00:01:53.271 SYMLINK libspdk_event.so 00:01:53.271 CC lib/bdev/bdev.o 00:01:53.271 CC lib/bdev/bdev_rpc.o 
00:01:53.271 CC lib/bdev/bdev_zone.o
00:01:53.271 CC lib/bdev/part.o
00:01:53.271 CC lib/bdev/scsi_nvme.o
00:01:53.271 SYMLINK libspdk_nvme.so
00:01:54.657 LIB libspdk_blob.a
00:01:54.657 SO libspdk_blob.so.11.0
00:01:54.657 SYMLINK libspdk_blob.so
00:01:54.919 CC lib/blobfs/blobfs.o
00:01:54.919 CC lib/blobfs/tree.o
00:01:54.919 CC lib/lvol/lvol.o
00:01:55.490 LIB libspdk_bdev.a
00:01:55.750 SO libspdk_bdev.so.15.0
00:01:55.750 LIB libspdk_blobfs.a
00:01:55.750 SO libspdk_blobfs.so.10.0
00:01:55.750 SYMLINK libspdk_bdev.so
00:01:55.750 SYMLINK libspdk_blobfs.so
00:01:55.750 LIB libspdk_lvol.a
00:01:55.750 SO libspdk_lvol.so.10.0
00:01:56.011 SYMLINK libspdk_lvol.so
00:01:56.011 CC lib/nbd/nbd.o
00:01:56.011 CC lib/nbd/nbd_rpc.o
00:01:56.011 CC lib/nvmf/ctrlr.o
00:01:56.011 CC lib/nvmf/ctrlr_discovery.o
00:01:56.011 CC lib/nvmf/ctrlr_bdev.o
00:01:56.012 CC lib/scsi/dev.o
00:01:56.012 CC lib/nvmf/subsystem.o
00:01:56.012 CC lib/nvmf/nvmf.o
00:01:56.012 CC lib/scsi/lun.o
00:01:56.012 CC lib/nvmf/nvmf_rpc.o
00:01:56.012 CC lib/nvmf/transport.o
00:01:56.012 CC lib/scsi/port.o
00:01:56.012 CC lib/ublk/ublk.o
00:01:56.012 CC lib/scsi/scsi.o
00:01:56.012 CC lib/nvmf/tcp.o
00:01:56.012 CC lib/ublk/ublk_rpc.o
00:01:56.012 CC lib/nvmf/stubs.o
00:01:56.012 CC lib/scsi/scsi_bdev.o
00:01:56.012 CC lib/nvmf/mdns_server.o
00:01:56.012 CC lib/scsi/scsi_pr.o
00:01:56.012 CC lib/nvmf/vfio_user.o
00:01:56.012 CC lib/scsi/scsi_rpc.o
00:01:56.012 CC lib/scsi/task.o
00:01:56.012 CC lib/nvmf/rdma.o
00:01:56.012 CC lib/nvmf/auth.o
00:01:56.012 CC lib/ftl/ftl_core.o
00:01:56.012 CC lib/ftl/ftl_init.o
00:01:56.012 CC lib/ftl/ftl_layout.o
00:01:56.012 CC lib/ftl/ftl_debug.o
00:01:56.012 CC lib/ftl/ftl_io.o
00:01:56.012 CC lib/ftl/ftl_sb.o
00:01:56.012 CC lib/ftl/ftl_l2p.o
00:01:56.012 CC lib/ftl/ftl_l2p_flat.o
00:01:56.271 CC lib/ftl/ftl_nv_cache.o
00:01:56.271 CC lib/ftl/ftl_band.o
00:01:56.271 CC lib/ftl/ftl_band_ops.o
00:01:56.271 CC lib/ftl/ftl_writer.o
00:01:56.271 CC lib/ftl/ftl_rq.o
00:01:56.271 CC lib/ftl/ftl_reloc.o
00:01:56.271 CC lib/ftl/ftl_l2p_cache.o
00:01:56.271 CC lib/ftl/ftl_p2l.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_startup.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_md.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_misc.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_band.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:01:56.271 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:01:56.271 CC lib/ftl/utils/ftl_conf.o
00:01:56.271 CC lib/ftl/utils/ftl_md.o
00:01:56.271 CC lib/ftl/utils/ftl_bitmap.o
00:01:56.271 CC lib/ftl/utils/ftl_mempool.o
00:01:56.271 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:01:56.271 CC lib/ftl/utils/ftl_property.o
00:01:56.271 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:01:56.271 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:01:56.271 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:01:56.271 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:01:56.271 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:01:56.271 CC lib/ftl/upgrade/ftl_sb_v5.o
00:01:56.271 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:01:56.271 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:01:56.271 CC lib/ftl/base/ftl_base_bdev.o
00:01:56.271 CC lib/ftl/upgrade/ftl_sb_v3.o
00:01:56.271 CC lib/ftl/nvc/ftl_nvc_dev.o
00:01:56.271 CC lib/ftl/base/ftl_base_dev.o
00:01:56.271 CC lib/ftl/ftl_trace.o 00:01:56.558 LIB libspdk_nbd.a 00:01:56.558 SO libspdk_nbd.so.7.0 00:01:56.558 SYMLINK libspdk_nbd.so 00:01:56.818 LIB libspdk_ublk.a 00:01:56.818 LIB libspdk_scsi.a 00:01:56.818 SO libspdk_ublk.so.3.0 00:01:56.818 SO libspdk_scsi.so.9.0 00:01:56.818 SYMLINK libspdk_ublk.so 00:01:56.818 SYMLINK libspdk_scsi.so 00:01:57.079 LIB libspdk_ftl.a 00:01:57.340 SO libspdk_ftl.so.9.0 00:01:57.340 CC lib/vhost/vhost.o 00:01:57.340 CC lib/vhost/vhost_rpc.o 00:01:57.340 CC lib/vhost/vhost_scsi.o 00:01:57.340 CC lib/vhost/vhost_blk.o 00:01:57.340 CC lib/vhost/rte_vhost_user.o 00:01:57.340 CC lib/iscsi/conn.o 00:01:57.340 CC lib/iscsi/init_grp.o 00:01:57.340 CC lib/iscsi/md5.o 00:01:57.340 CC lib/iscsi/iscsi.o 00:01:57.340 CC lib/iscsi/param.o 00:01:57.340 CC lib/iscsi/tgt_node.o 00:01:57.340 CC lib/iscsi/portal_grp.o 00:01:57.340 CC lib/iscsi/iscsi_subsystem.o 00:01:57.340 CC lib/iscsi/iscsi_rpc.o 00:01:57.340 CC lib/iscsi/task.o 00:01:57.601 SYMLINK libspdk_ftl.so 00:01:57.863 LIB libspdk_nvmf.a 00:01:58.124 SO libspdk_nvmf.so.18.1 00:01:58.124 LIB libspdk_vhost.a 00:01:58.124 SYMLINK libspdk_nvmf.so 00:01:58.385 SO libspdk_vhost.so.8.0 00:01:58.385 SYMLINK libspdk_vhost.so 00:01:58.385 LIB libspdk_iscsi.a 00:01:58.647 SO libspdk_iscsi.so.8.0 00:01:58.647 SYMLINK libspdk_iscsi.so 00:01:59.281 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.281 CC module/vfu_device/vfu_virtio.o 00:01:59.281 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.281 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.281 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.281 CC module/keyring/file/keyring.o 00:01:59.281 LIB libspdk_env_dpdk_rpc.a 00:01:59.281 CC module/keyring/file/keyring_rpc.o 00:01:59.281 CC module/blob/bdev/blob_bdev.o 00:01:59.281 CC module/accel/iaa/accel_iaa.o 00:01:59.281 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.281 CC module/sock/posix/posix.o 00:01:59.281 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.281 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.281 CC module/accel/ioat/accel_ioat.o 00:01:59.281 CC module/accel/error/accel_error.o 00:01:59.281 CC module/accel/error/accel_error_rpc.o 00:01:59.281 CC module/keyring/linux/keyring.o 00:01:59.281 CC module/accel/dsa/accel_dsa.o 00:01:59.281 CC module/keyring/linux/keyring_rpc.o 00:01:59.281 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.542 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.542 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.542 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.542 SYMLINK libspdk_env_dpdk_rpc.so 00:01:59.542 LIB libspdk_scheduler_gscheduler.a 00:01:59.542 LIB libspdk_keyring_file.a 00:01:59.542 LIB libspdk_keyring_linux.a 00:01:59.542 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.542 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.542 LIB libspdk_accel_error.a 00:01:59.542 SO libspdk_keyring_file.so.1.0 00:01:59.542 LIB libspdk_accel_ioat.a 00:01:59.542 LIB libspdk_accel_iaa.a 00:01:59.542 LIB libspdk_scheduler_dynamic.a 00:01:59.542 SO libspdk_keyring_linux.so.1.0 00:01:59.542 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.803 SO libspdk_accel_ioat.so.6.0 00:01:59.803 SO libspdk_accel_error.so.2.0 00:01:59.803 SO libspdk_accel_iaa.so.3.0 00:01:59.803 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.803 SYMLINK libspdk_keyring_file.so 00:01:59.803 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.803 LIB libspdk_blob_bdev.a 00:01:59.803 LIB libspdk_accel_dsa.a 00:01:59.803 SYMLINK libspdk_keyring_linux.so 00:01:59.803 SO libspdk_blob_bdev.so.11.0 00:01:59.803 SYMLINK 
libspdk_accel_ioat.so 00:01:59.803 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.803 SO libspdk_accel_dsa.so.5.0 00:01:59.803 SYMLINK libspdk_accel_error.so 00:01:59.803 SYMLINK libspdk_accel_iaa.so 00:01:59.803 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.803 SYMLINK libspdk_blob_bdev.so 00:01:59.803 SYMLINK libspdk_accel_dsa.so 00:01:59.803 LIB libspdk_vfu_device.a 00:01:59.803 SO libspdk_vfu_device.so.3.0 00:02:00.064 SYMLINK libspdk_vfu_device.so 00:02:00.064 LIB libspdk_sock_posix.a 00:02:00.064 SO libspdk_sock_posix.so.6.0 00:02:00.064 SYMLINK libspdk_sock_posix.so 00:02:00.325 CC module/bdev/null/bdev_null.o 00:02:00.325 CC module/bdev/raid/bdev_raid.o 00:02:00.325 CC module/bdev/null/bdev_null_rpc.o 00:02:00.325 CC module/bdev/nvme/bdev_nvme.o 00:02:00.325 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.325 CC module/bdev/raid/bdev_raid_sb.o 00:02:00.325 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:00.325 CC module/bdev/raid/bdev_raid_rpc.o 00:02:00.325 CC module/bdev/nvme/nvme_rpc.o 00:02:00.325 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:00.325 CC module/bdev/nvme/bdev_mdns_client.o 00:02:00.325 CC module/bdev/raid/raid0.o 00:02:00.325 CC module/bdev/nvme/vbdev_opal.o 00:02:00.325 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:00.325 CC module/bdev/raid/raid1.o 00:02:00.325 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:00.325 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:00.325 CC module/bdev/raid/concat.o 00:02:00.325 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:00.325 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:00.325 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.325 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:00.325 CC module/bdev/split/vbdev_split.o 00:02:00.325 CC module/bdev/delay/vbdev_delay.o 00:02:00.325 CC module/bdev/passthru/vbdev_passthru.o 00:02:00.325 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:00.325 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:00.325 CC module/bdev/split/vbdev_split_rpc.o 00:02:00.325 CC module/bdev/gpt/gpt.o 00:02:00.325 CC module/bdev/gpt/vbdev_gpt.o 00:02:00.326 CC module/bdev/error/vbdev_error.o 00:02:00.326 CC module/bdev/error/vbdev_error_rpc.o 00:02:00.326 CC module/bdev/ftl/bdev_ftl.o 00:02:00.326 CC module/bdev/iscsi/bdev_iscsi.o 00:02:00.326 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:00.326 CC module/bdev/malloc/bdev_malloc.o 00:02:00.326 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.326 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:00.326 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:00.326 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:00.326 CC module/bdev/aio/bdev_aio.o 00:02:00.326 CC module/bdev/aio/bdev_aio_rpc.o 00:02:00.587 LIB libspdk_blobfs_bdev.a 00:02:00.587 LIB libspdk_bdev_split.a 00:02:00.587 LIB libspdk_bdev_null.a 00:02:00.587 LIB libspdk_bdev_gpt.a 00:02:00.587 SO libspdk_blobfs_bdev.so.6.0 00:02:00.587 LIB libspdk_bdev_error.a 00:02:00.587 SO libspdk_bdev_null.so.6.0 00:02:00.587 SO libspdk_bdev_split.so.6.0 00:02:00.587 LIB libspdk_bdev_passthru.a 00:02:00.587 SO libspdk_bdev_gpt.so.6.0 00:02:00.587 SO libspdk_bdev_error.so.6.0 00:02:00.587 LIB libspdk_bdev_ftl.a 00:02:00.587 LIB libspdk_bdev_delay.a 00:02:00.587 LIB libspdk_bdev_malloc.a 00:02:00.587 SO libspdk_bdev_passthru.so.6.0 00:02:00.587 SYMLINK libspdk_blobfs_bdev.so 00:02:00.587 SYMLINK libspdk_bdev_null.so 00:02:00.587 LIB libspdk_bdev_aio.a 00:02:00.587 LIB libspdk_bdev_iscsi.a 00:02:00.587 SYMLINK libspdk_bdev_split.so 00:02:00.587 LIB libspdk_bdev_zone_block.a 00:02:00.587 SO libspdk_bdev_ftl.so.6.0 00:02:00.587 SYMLINK 
libspdk_bdev_gpt.so 00:02:00.587 SO libspdk_bdev_delay.so.6.0 00:02:00.587 SO libspdk_bdev_malloc.so.6.0 00:02:00.587 SYMLINK libspdk_bdev_error.so 00:02:00.587 SYMLINK libspdk_bdev_passthru.so 00:02:00.848 SO libspdk_bdev_aio.so.6.0 00:02:00.848 SO libspdk_bdev_iscsi.so.6.0 00:02:00.848 SO libspdk_bdev_zone_block.so.6.0 00:02:00.848 SYMLINK libspdk_bdev_ftl.so 00:02:00.848 SYMLINK libspdk_bdev_delay.so 00:02:00.848 SYMLINK libspdk_bdev_malloc.so 00:02:00.848 SYMLINK libspdk_bdev_aio.so 00:02:00.848 SYMLINK libspdk_bdev_iscsi.so 00:02:00.848 SYMLINK libspdk_bdev_zone_block.so 00:02:00.849 LIB libspdk_bdev_lvol.a 00:02:00.849 LIB libspdk_bdev_virtio.a 00:02:00.849 SO libspdk_bdev_lvol.so.6.0 00:02:00.849 SO libspdk_bdev_virtio.so.6.0 00:02:00.849 SYMLINK libspdk_bdev_lvol.so 00:02:00.849 SYMLINK libspdk_bdev_virtio.so 00:02:01.110 LIB libspdk_bdev_raid.a 00:02:01.110 SO libspdk_bdev_raid.so.6.0 00:02:01.371 SYMLINK libspdk_bdev_raid.so 00:02:02.314 LIB libspdk_bdev_nvme.a 00:02:02.314 SO libspdk_bdev_nvme.so.7.0 00:02:02.314 SYMLINK libspdk_bdev_nvme.so 00:02:03.258 CC module/event/subsystems/sock/sock.o 00:02:03.258 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.258 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.258 CC module/event/subsystems/vmd/vmd.o 00:02:03.258 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.258 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.258 CC module/event/subsystems/keyring/keyring.o 00:02:03.258 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.258 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.258 LIB libspdk_event_sock.a 00:02:03.258 LIB libspdk_event_scheduler.a 00:02:03.258 LIB libspdk_event_keyring.a 00:02:03.258 LIB libspdk_event_vmd.a 00:02:03.258 LIB libspdk_event_vhost_blk.a 00:02:03.258 LIB libspdk_event_iobuf.a 00:02:03.258 LIB libspdk_event_vfu_tgt.a 00:02:03.258 SO libspdk_event_sock.so.5.0 00:02:03.258 SO libspdk_event_keyring.so.1.0 00:02:03.258 SO libspdk_event_scheduler.so.4.0 00:02:03.258 SO libspdk_event_vhost_blk.so.3.0 00:02:03.258 SO libspdk_event_vmd.so.6.0 00:02:03.258 SO libspdk_event_iobuf.so.3.0 00:02:03.258 SO libspdk_event_vfu_tgt.so.3.0 00:02:03.258 SYMLINK libspdk_event_keyring.so 00:02:03.258 SYMLINK libspdk_event_sock.so 00:02:03.258 SYMLINK libspdk_event_vhost_blk.so 00:02:03.258 SYMLINK libspdk_event_scheduler.so 00:02:03.258 SYMLINK libspdk_event_vfu_tgt.so 00:02:03.258 SYMLINK libspdk_event_vmd.so 00:02:03.258 SYMLINK libspdk_event_iobuf.so 00:02:03.900 CC module/event/subsystems/accel/accel.o 00:02:03.900 LIB libspdk_event_accel.a 00:02:03.900 SO libspdk_event_accel.so.6.0 00:02:03.900 SYMLINK libspdk_event_accel.so 00:02:04.162 CC module/event/subsystems/bdev/bdev.o 00:02:04.424 LIB libspdk_event_bdev.a 00:02:04.424 SO libspdk_event_bdev.so.6.0 00:02:04.685 SYMLINK libspdk_event_bdev.so 00:02:04.947 CC module/event/subsystems/scsi/scsi.o 00:02:04.947 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.947 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.947 CC module/event/subsystems/nbd/nbd.o 00:02:04.947 CC module/event/subsystems/ublk/ublk.o 00:02:05.208 LIB libspdk_event_ublk.a 00:02:05.208 LIB libspdk_event_scsi.a 00:02:05.208 LIB libspdk_event_nbd.a 00:02:05.208 SO libspdk_event_ublk.so.3.0 00:02:05.208 SO libspdk_event_scsi.so.6.0 00:02:05.208 LIB libspdk_event_nvmf.a 00:02:05.208 SO libspdk_event_nbd.so.6.0 00:02:05.208 SO libspdk_event_nvmf.so.6.0 00:02:05.208 SYMLINK libspdk_event_ublk.so 00:02:05.208 SYMLINK libspdk_event_scsi.so 00:02:05.208 SYMLINK libspdk_event_nbd.so 
00:02:05.208 SYMLINK libspdk_event_nvmf.so 00:02:05.469 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:05.469 CC module/event/subsystems/iscsi/iscsi.o 00:02:05.730 LIB libspdk_event_vhost_scsi.a 00:02:05.730 LIB libspdk_event_iscsi.a 00:02:05.730 SO libspdk_event_vhost_scsi.so.3.0 00:02:05.730 SO libspdk_event_iscsi.so.6.0 00:02:05.730 SYMLINK libspdk_event_vhost_scsi.so 00:02:05.730 SYMLINK libspdk_event_iscsi.so 00:02:05.992 SO libspdk.so.6.0 00:02:05.992 SYMLINK libspdk.so 00:02:06.562 CC app/spdk_top/spdk_top.o 00:02:06.562 CXX app/trace/trace.o 00:02:06.562 CC app/trace_record/trace_record.o 00:02:06.562 CC app/spdk_nvme_perf/perf.o 00:02:06.562 CC app/spdk_nvme_identify/identify.o 00:02:06.562 CC app/spdk_lspci/spdk_lspci.o 00:02:06.562 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.562 CC app/spdk_dd/spdk_dd.o 00:02:06.562 CC app/nvmf_tgt/nvmf_main.o 00:02:06.562 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.562 TEST_HEADER include/spdk/accel.h 00:02:06.562 CC app/vhost/vhost.o 00:02:06.562 TEST_HEADER include/spdk/accel_module.h 00:02:06.562 TEST_HEADER include/spdk/base64.h 00:02:06.562 CC test/rpc_client/rpc_client_test.o 00:02:06.562 TEST_HEADER include/spdk/assert.h 00:02:06.562 TEST_HEADER include/spdk/barrier.h 00:02:06.562 TEST_HEADER include/spdk/bit_array.h 00:02:06.562 TEST_HEADER include/spdk/bdev_module.h 00:02:06.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.562 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.562 TEST_HEADER include/spdk/bdev.h 00:02:06.562 TEST_HEADER include/spdk/bit_pool.h 00:02:06.562 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.562 TEST_HEADER include/spdk/blobfs.h 00:02:06.562 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.562 TEST_HEADER include/spdk/config.h 00:02:06.562 TEST_HEADER include/spdk/conf.h 00:02:06.562 TEST_HEADER include/spdk/cpuset.h 00:02:06.562 TEST_HEADER include/spdk/crc16.h 00:02:06.562 TEST_HEADER include/spdk/blob.h 00:02:06.562 TEST_HEADER include/spdk/crc32.h 00:02:06.562 TEST_HEADER include/spdk/crc64.h 00:02:06.562 TEST_HEADER include/spdk/dif.h 00:02:06.562 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.562 TEST_HEADER include/spdk/dma.h 00:02:06.562 TEST_HEADER include/spdk/endian.h 00:02:06.562 TEST_HEADER include/spdk/event.h 00:02:06.562 TEST_HEADER include/spdk/fd_group.h 00:02:06.562 TEST_HEADER include/spdk/fd.h 00:02:06.562 TEST_HEADER include/spdk/env.h 00:02:06.562 TEST_HEADER include/spdk/file.h 00:02:06.562 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.562 TEST_HEADER include/spdk/histogram_data.h 00:02:06.562 CC app/spdk_tgt/spdk_tgt.o 00:02:06.562 TEST_HEADER include/spdk/idxd.h 00:02:06.562 TEST_HEADER include/spdk/ftl.h 00:02:06.562 TEST_HEADER include/spdk/hexlify.h 00:02:06.562 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.562 TEST_HEADER include/spdk/ioat.h 00:02:06.562 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.562 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.563 TEST_HEADER include/spdk/init.h 00:02:06.563 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.563 TEST_HEADER include/spdk/json.h 00:02:06.563 TEST_HEADER include/spdk/keyring_module.h 00:02:06.563 TEST_HEADER include/spdk/keyring.h 00:02:06.563 TEST_HEADER include/spdk/log.h 00:02:06.563 TEST_HEADER include/spdk/lvol.h 00:02:06.563 TEST_HEADER include/spdk/likely.h 00:02:06.563 TEST_HEADER include/spdk/memory.h 00:02:06.563 TEST_HEADER include/spdk/notify.h 00:02:06.563 TEST_HEADER include/spdk/mmio.h 00:02:06.563 TEST_HEADER include/spdk/nvme.h 00:02:06.563 TEST_HEADER include/spdk/nbd.h 00:02:06.563 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:02:06.563 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.563 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.563 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.563 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.563 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.563 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.563 TEST_HEADER include/spdk/nvmf.h 00:02:06.563 TEST_HEADER include/spdk/opal.h 00:02:06.563 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.563 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.563 TEST_HEADER include/spdk/pci_ids.h 00:02:06.563 TEST_HEADER include/spdk/pipe.h 00:02:06.563 CC test/nvme/reset/reset.o 00:02:06.563 TEST_HEADER include/spdk/reduce.h 00:02:06.563 TEST_HEADER include/spdk/opal_spec.h 00:02:06.563 TEST_HEADER include/spdk/scsi.h 00:02:06.563 TEST_HEADER include/spdk/rpc.h 00:02:06.563 TEST_HEADER include/spdk/queue.h 00:02:06.563 TEST_HEADER include/spdk/scsi_spec.h 00:02:06.563 TEST_HEADER include/spdk/sock.h 00:02:06.563 TEST_HEADER include/spdk/scheduler.h 00:02:06.563 TEST_HEADER include/spdk/stdinc.h 00:02:06.563 TEST_HEADER include/spdk/string.h 00:02:06.563 TEST_HEADER include/spdk/thread.h 00:02:06.563 CC examples/ioat/verify/verify.o 00:02:06.563 TEST_HEADER include/spdk/ublk.h 00:02:06.563 CC examples/util/zipf/zipf.o 00:02:06.563 TEST_HEADER include/spdk/trace.h 00:02:06.563 TEST_HEADER include/spdk/util.h 00:02:06.563 TEST_HEADER include/spdk/trace_parser.h 00:02:06.563 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.563 TEST_HEADER include/spdk/tree.h 00:02:06.563 TEST_HEADER include/spdk/version.h 00:02:06.563 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.563 TEST_HEADER include/spdk/uuid.h 00:02:06.563 CC test/nvme/aer/aer.o 00:02:06.563 CC test/nvme/connect_stress/connect_stress.o 00:02:06.563 TEST_HEADER include/spdk/vhost.h 00:02:06.563 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.563 CC test/thread/poller_perf/poller_perf.o 00:02:06.563 CC test/app/jsoncat/jsoncat.o 00:02:06.563 TEST_HEADER include/spdk/xor.h 00:02:06.563 TEST_HEADER include/spdk/vmd.h 00:02:06.563 CC examples/accel/perf/accel_perf.o 00:02:06.563 CC test/nvme/simple_copy/simple_copy.o 00:02:06.563 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.563 CXX test/cpp_headers/accel.o 00:02:06.563 TEST_HEADER include/spdk/zipf.h 00:02:06.563 CC test/event/app_repeat/app_repeat.o 00:02:06.563 CC examples/thread/thread/thread_ex.o 00:02:06.563 CC examples/nvme/reconnect/reconnect.o 00:02:06.563 LINK spdk_lspci 00:02:06.563 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.563 CXX test/cpp_headers/assert.o 00:02:06.563 CC test/app/histogram_perf/histogram_perf.o 00:02:06.563 CXX test/cpp_headers/accel_module.o 00:02:06.563 CXX test/cpp_headers/barrier.o 00:02:06.563 CC examples/nvme/arbitration/arbitration.o 00:02:06.563 CC test/accel/dif/dif.o 00:02:06.563 CXX test/cpp_headers/base64.o 00:02:06.563 CXX test/cpp_headers/bdev.o 00:02:06.563 CXX test/cpp_headers/bdev_module.o 00:02:06.563 CC test/app/stub/stub.o 00:02:06.563 CC test/event/reactor/reactor.o 00:02:06.563 CC examples/sock/hello_world/hello_sock.o 00:02:06.563 CC examples/nvme/hotplug/hotplug.o 00:02:06.563 CC test/nvme/err_injection/err_injection.o 00:02:06.563 CC examples/idxd/perf/perf.o 00:02:06.563 CC test/event/event_perf/event_perf.o 00:02:06.563 CXX test/cpp_headers/bit_array.o 00:02:06.563 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.563 CXX test/cpp_headers/bdev_zone.o 00:02:06.827 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:06.827 CXX 
test/cpp_headers/blob_bdev.o 00:02:06.827 CXX test/cpp_headers/conf.o 00:02:06.827 CXX test/cpp_headers/bit_pool.o 00:02:06.827 CXX test/cpp_headers/config.o 00:02:06.827 CXX test/cpp_headers/cpuset.o 00:02:06.827 CC examples/nvmf/nvmf/nvmf.o 00:02:06.827 CXX test/cpp_headers/crc32.o 00:02:06.827 CXX test/cpp_headers/blobfs.o 00:02:06.827 CXX test/cpp_headers/blob.o 00:02:06.827 CXX test/cpp_headers/crc16.o 00:02:06.827 CC test/nvme/compliance/nvme_compliance.o 00:02:06.827 CXX test/cpp_headers/crc64.o 00:02:06.827 CXX test/cpp_headers/dif.o 00:02:06.827 CC examples/nvme/abort/abort.o 00:02:06.827 CC test/blobfs/mkfs/mkfs.o 00:02:06.827 CC test/nvme/fdp/fdp.o 00:02:06.827 CXX test/cpp_headers/dma.o 00:02:06.827 CXX test/cpp_headers/env.o 00:02:06.827 CXX test/cpp_headers/endian.o 00:02:06.827 CXX test/cpp_headers/env_dpdk.o 00:02:06.827 CC test/nvme/overhead/overhead.o 00:02:06.827 CXX test/cpp_headers/gpt_spec.o 00:02:06.827 CC examples/ioat/perf/perf.o 00:02:06.827 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.827 CXX test/cpp_headers/event.o 00:02:06.827 CXX test/cpp_headers/hexlify.o 00:02:06.827 CC test/event/reactor_perf/reactor_perf.o 00:02:06.827 CC examples/bdev/bdevperf/bdevperf.o 00:02:06.827 CC test/env/memory/memory_ut.o 00:02:06.827 CXX test/cpp_headers/histogram_data.o 00:02:06.827 CXX test/cpp_headers/fd_group.o 00:02:06.827 CC test/nvme/boot_partition/boot_partition.o 00:02:06.827 CXX test/cpp_headers/fd.o 00:02:06.827 CC test/env/pci/pci_ut.o 00:02:06.827 CXX test/cpp_headers/file.o 00:02:06.827 CC examples/vmd/led/led.o 00:02:06.827 CC test/nvme/e2edp/nvme_dp.o 00:02:06.827 CXX test/cpp_headers/ftl.o 00:02:06.827 CXX test/cpp_headers/idxd.o 00:02:06.827 CXX test/cpp_headers/idxd_spec.o 00:02:06.827 CC examples/blob/cli/blobcli.o 00:02:06.827 CC test/app/bdev_svc/bdev_svc.o 00:02:06.827 CXX test/cpp_headers/ioat.o 00:02:06.827 CXX test/cpp_headers/init.o 00:02:06.827 CC test/env/vtophys/vtophys.o 00:02:06.827 CXX test/cpp_headers/ioat_spec.o 00:02:06.827 CXX test/cpp_headers/iscsi_spec.o 00:02:06.827 CXX test/cpp_headers/json.o 00:02:06.827 CC test/nvme/sgl/sgl.o 00:02:06.827 CC examples/blob/hello_world/hello_blob.o 00:02:06.827 CXX test/cpp_headers/jsonrpc.o 00:02:06.827 CC test/dma/test_dma/test_dma.o 00:02:06.827 CXX test/cpp_headers/keyring_module.o 00:02:06.827 LINK spdk_nvme_discover 00:02:06.827 CXX test/cpp_headers/likely.o 00:02:06.827 CXX test/cpp_headers/mmio.o 00:02:06.827 CXX test/cpp_headers/keyring.o 00:02:06.827 CXX test/cpp_headers/log.o 00:02:06.827 CXX test/cpp_headers/lvol.o 00:02:06.827 CXX test/cpp_headers/memory.o 00:02:06.827 CXX test/cpp_headers/nbd.o 00:02:06.827 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:06.827 CXX test/cpp_headers/notify.o 00:02:06.827 CC test/nvme/cuse/cuse.o 00:02:06.827 CC examples/nvme/hello_world/hello_world.o 00:02:06.827 CXX test/cpp_headers/nvme.o 00:02:06.827 CC test/nvme/startup/startup.o 00:02:06.827 LINK rpc_client_test 00:02:06.827 CXX test/cpp_headers/nvme_intel.o 00:02:06.827 LINK nvmf_tgt 00:02:06.827 CXX test/cpp_headers/nvme_ocssd.o 00:02:06.827 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:06.827 CC test/nvme/reserve/reserve.o 00:02:06.827 CXX test/cpp_headers/nvme_spec.o 00:02:06.827 CXX test/cpp_headers/nvme_zns.o 00:02:06.827 CXX test/cpp_headers/nvmf_cmd.o 00:02:06.827 CXX test/cpp_headers/opal_spec.o 00:02:06.827 CXX test/cpp_headers/nvmf.o 00:02:06.827 LINK vhost 00:02:06.827 CXX test/cpp_headers/nvmf_spec.o 00:02:06.827 CXX test/cpp_headers/nvmf_transport.o 00:02:06.827 CXX test/cpp_headers/pipe.o 
00:02:06.827 CXX test/cpp_headers/opal.o 00:02:06.827 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.827 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.827 CXX test/cpp_headers/rpc.o 00:02:06.827 CXX test/cpp_headers/pci_ids.o 00:02:06.827 CXX test/cpp_headers/queue.o 00:02:06.827 CXX test/cpp_headers/reduce.o 00:02:06.827 CC app/fio/nvme/fio_plugin.o 00:02:06.827 CC examples/bdev/hello_world/hello_bdev.o 00:02:06.827 CC test/event/scheduler/scheduler.o 00:02:06.827 LINK spdk_trace_record 00:02:06.827 CC test/bdev/bdevio/bdevio.o 00:02:06.827 LINK interrupt_tgt 00:02:07.087 CC app/fio/bdev/fio_plugin.o 00:02:07.087 CXX test/cpp_headers/scheduler.o 00:02:07.087 CXX test/cpp_headers/scsi.o 00:02:07.087 LINK histogram_perf 00:02:07.087 LINK spdk_dd 00:02:07.087 LINK jsoncat 00:02:07.087 LINK connect_stress 00:02:07.087 CXX test/cpp_headers/scsi_spec.o 00:02:07.087 LINK pmr_persistence 00:02:07.087 LINK poller_perf 00:02:07.087 LINK event_perf 00:02:07.087 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.087 LINK stub 00:02:07.087 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.087 LINK spdk_tgt 00:02:07.087 LINK err_injection 00:02:07.087 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.346 LINK doorbell_aers 00:02:07.346 LINK fused_ordering 00:02:07.346 LINK verify 00:02:07.346 LINK vtophys 00:02:07.346 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.346 CXX test/cpp_headers/sock.o 00:02:07.346 LINK cmb_copy 00:02:07.346 CXX test/cpp_headers/stdinc.o 00:02:07.346 CC test/lvol/esnap/esnap.o 00:02:07.346 LINK mkfs 00:02:07.346 LINK spdk_trace 00:02:07.346 LINK hello_sock 00:02:07.346 CXX test/cpp_headers/string.o 00:02:07.346 CXX test/cpp_headers/thread.o 00:02:07.346 CXX test/cpp_headers/trace.o 00:02:07.346 CXX test/cpp_headers/trace_parser.o 00:02:07.346 LINK aer 00:02:07.346 LINK hotplug 00:02:07.346 CXX test/cpp_headers/tree.o 00:02:07.346 CXX test/cpp_headers/ublk.o 00:02:07.346 CXX test/cpp_headers/util.o 00:02:07.346 LINK thread 00:02:07.346 CXX test/cpp_headers/uuid.o 00:02:07.346 CXX test/cpp_headers/version.o 00:02:07.346 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.346 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.346 CXX test/cpp_headers/vhost.o 00:02:07.346 LINK hello_blob 00:02:07.346 CXX test/cpp_headers/vmd.o 00:02:07.346 CXX test/cpp_headers/xor.o 00:02:07.346 CXX test/cpp_headers/zipf.o 00:02:07.346 LINK idxd_perf 00:02:07.346 LINK nvmf 00:02:07.346 LINK reconnect 00:02:07.346 LINK nvme_compliance 00:02:07.346 LINK abort 00:02:07.605 LINK fdp 00:02:07.605 LINK accel_perf 00:02:07.605 LINK iscsi_tgt 00:02:07.605 LINK reactor 00:02:07.605 LINK dif 00:02:07.605 LINK led 00:02:07.605 LINK spdk_nvme_perf 00:02:07.605 LINK spdk_nvme_identify 00:02:07.605 LINK app_repeat 00:02:07.605 LINK boot_partition 00:02:07.605 LINK zipf 00:02:07.605 LINK lsvmd 00:02:07.605 LINK blobcli 00:02:07.605 LINK ioat_perf 00:02:07.605 LINK spdk_top 00:02:07.605 LINK bdev_svc 00:02:07.605 LINK env_dpdk_post_init 00:02:07.865 LINK scheduler 00:02:07.865 LINK reactor_perf 00:02:07.865 LINK nvme_fuzz 00:02:07.865 LINK nvme_dp 00:02:07.865 LINK simple_copy 00:02:07.865 LINK startup 00:02:07.865 LINK reset 00:02:07.865 LINK overhead 00:02:07.865 LINK spdk_nvme 00:02:07.865 LINK sgl 00:02:07.865 LINK reserve 00:02:07.865 LINK test_dma 00:02:07.865 LINK mem_callbacks 00:02:07.865 LINK bdevperf 00:02:07.865 LINK hello_world 00:02:07.865 LINK vhost_fuzz 00:02:07.865 LINK hello_bdev 00:02:07.865 LINK bdevio 00:02:08.125 LINK arbitration 00:02:08.125 
LINK pci_ut 00:02:08.125 LINK nvme_manage 00:02:08.125 LINK spdk_bdev 00:02:08.387 LINK cuse 00:02:08.647 LINK memory_ut 00:02:08.908 LINK iscsi_fuzz 00:02:12.209 LINK esnap 00:02:12.209 00:02:12.209 real 0m50.276s 00:02:12.209 user 6m37.305s 00:02:12.209 sys 4m50.899s 00:02:12.209 12:06:17 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:12.209 12:06:17 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.209 ************************************ 00:02:12.209 END TEST make 00:02:12.209 ************************************ 00:02:12.209 12:06:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.209 12:06:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.209 12:06:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.209 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.209 12:06:17 -- pm/common@44 -- $ pid=294887 00:02:12.209 12:06:17 -- pm/common@50 -- $ kill -TERM 294887 00:02:12.209 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.209 12:06:17 -- pm/common@44 -- $ pid=294888 00:02:12.209 12:06:17 -- pm/common@50 -- $ kill -TERM 294888 00:02:12.209 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.209 12:06:17 -- pm/common@44 -- $ pid=294891 00:02:12.209 12:06:17 -- pm/common@50 -- $ kill -TERM 294891 00:02:12.209 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.209 12:06:17 -- pm/common@44 -- $ pid=294915 00:02:12.209 12:06:17 -- pm/common@50 -- $ sudo -E kill -TERM 294915 00:02:12.209 12:06:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.209 12:06:17 -- nvmf/common.sh@7 -- # uname -s 00:02:12.209 12:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.209 12:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.209 12:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.209 12:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.209 12:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.209 12:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.209 12:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.209 12:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.209 12:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.209 12:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.209 12:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:12.209 12:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:12.209 12:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.209 12:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.209 12:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.209 12:06:17 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.209 12:06:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.209 12:06:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.209 12:06:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.209 12:06:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.209 12:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.209 12:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.209 12:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.209 12:06:17 -- paths/export.sh@5 -- # export PATH 00:02:12.209 12:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.209 12:06:17 -- nvmf/common.sh@47 -- # : 0 00:02:12.209 12:06:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.209 12:06:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.209 12:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.209 12:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.209 12:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.209 12:06:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.209 12:06:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.209 12:06:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.209 12:06:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.209 12:06:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.209 12:06:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.209 12:06:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.209 12:06:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.209 12:06:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.209 12:06:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.209 12:06:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.209 12:06:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.209 12:06:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.209 12:06:17 -- spdk/autotest.sh@48 -- # udevadm_pid=358479 00:02:12.209 12:06:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.209 12:06:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.209 12:06:17 -- 
pm/common@17 -- # local monitor 00:02:12.209 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@21 -- # date +%s 00:02:12.209 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.209 12:06:17 -- pm/common@21 -- # date +%s 00:02:12.209 12:06:17 -- pm/common@25 -- # sleep 1 00:02:12.209 12:06:17 -- pm/common@21 -- # date +%s 00:02:12.209 12:06:17 -- pm/common@21 -- # date +%s 00:02:12.209 12:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013977 00:02:12.209 12:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013977 00:02:12.209 12:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013977 00:02:12.209 12:06:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013977 00:02:12.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013977_collect-vmstat.pm.log 00:02:12.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013977_collect-cpu-load.pm.log 00:02:12.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013977_collect-cpu-temp.pm.log 00:02:12.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013977_collect-bmc-pm.bmc.pm.log 00:02:13.416 12:06:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.416 12:06:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.416 12:06:18 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:13.416 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:02:13.416 12:06:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.416 12:06:18 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:13.416 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:02:13.416 12:06:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:13.416 12:06:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.416 12:06:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.416 12:06:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.416 12:06:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.416 12:06:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.416 12:06:18 -- common/autotest_common.sh@1454 -- # uname 00:02:13.416 12:06:18 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:13.416 12:06:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.416 
12:06:18 -- common/autotest_common.sh@1474 -- # uname 00:02:13.416 12:06:18 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:13.416 12:06:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.416 12:06:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.416 12:06:18 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.416 12:06:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.416 12:06:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:13.416 --rc lcov_branch_coverage=1 00:02:13.416 --rc lcov_function_coverage=1 00:02:13.416 --rc genhtml_branch_coverage=1 00:02:13.416 --rc genhtml_function_coverage=1 00:02:13.416 --rc genhtml_legend=1 00:02:13.416 --rc geninfo_all_blocks=1 00:02:13.416 ' 00:02:13.416 12:06:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:13.416 --rc lcov_branch_coverage=1 00:02:13.416 --rc lcov_function_coverage=1 00:02:13.416 --rc genhtml_branch_coverage=1 00:02:13.416 --rc genhtml_function_coverage=1 00:02:13.416 --rc genhtml_legend=1 00:02:13.416 --rc geninfo_all_blocks=1 00:02:13.416 ' 00:02:13.416 12:06:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:13.416 --rc lcov_branch_coverage=1 00:02:13.416 --rc lcov_function_coverage=1 00:02:13.416 --rc genhtml_branch_coverage=1 00:02:13.416 --rc genhtml_function_coverage=1 00:02:13.416 --rc genhtml_legend=1 00:02:13.416 --rc geninfo_all_blocks=1 00:02:13.416 --no-external' 00:02:13.416 12:06:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:13.416 --rc lcov_branch_coverage=1 00:02:13.416 --rc lcov_function_coverage=1 00:02:13.416 --rc genhtml_branch_coverage=1 00:02:13.416 --rc genhtml_function_coverage=1 00:02:13.416 --rc genhtml_legend=1 00:02:13.416 --rc geninfo_all_blocks=1 00:02:13.416 --no-external' 00:02:13.416 12:06:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.416 lcov: LCOV version 1.14 00:02:13.416 12:06:18 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:21.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:21.553 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:33.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:33.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:33.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:33.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:33.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:33.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no 
functions found 00:02:33.801 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno [the same ':no functions found' / 'geninfo: WARNING: GCOV did not produce any data for ...' pair repeats for the remaining test/cpp_headers gcno files: barrier, blobfs_bdev, accel, bdev_module, config, cpuset, conf, blob_bdev, bit_array, bit_pool, bdev_zone, blobfs, blob, crc64, crc16, env_dpdk, dif, crc32, histogram_data, env, hexlify, fd, idxd_spec, ftl, event, dma, ioat, file, iscsi_spec, keyring, likely, endian, idxd, fd_group, gpt_spec, nvme_ocssd_spec, notify, json, nvme, ioat_spec, mmio, nvmf_fc_spec, jsonrpc, lvol, init, log, opal_spec, keyring_module, memory, nvme_zns, nbd, nvme_spec, nvme_intel, nvmf_cmd, opal, nvmf, nvmf_spec, nvme_ocssd, reduce, rpc, pci_ids, pipe, queue, nvmf_transport, scsi, scheduler, scsi_spec, sock, stdinc, string, thread, trace, tree, ublk, util, trace_parser, uuid, version, vfio_user_pci, vhost, vmd] 00:02:33.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:33.803 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:33.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:33.803 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:33.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:33.803 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:34.063 12:06:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:34.063 12:06:39 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:34.063 12:06:39 -- common/autotest_common.sh@10 -- # set +x 00:02:34.063 12:06:39 -- spdk/autotest.sh@91 -- # rm -f 00:02:34.063 12:06:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.272 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:38.273 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:38.273 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:38.273 12:06:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:38.273 12:06:43 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:38.273 12:06:43 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:38.273 12:06:43 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:38.273 12:06:43 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:38.273 12:06:43 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:38.273 12:06:43 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:38.273 12:06:43 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.273 12:06:43 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:38.273 12:06:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:38.273 12:06:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.273 12:06:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.273 12:06:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:38.273 12:06:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:38.273 12:06:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.273 No valid GPT data, bailing 00:02:38.273 12:06:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n1 00:02:38.273 12:06:43 -- scripts/common.sh@391 -- # pt= 00:02:38.273 12:06:43 -- scripts/common.sh@392 -- # return 1 00:02:38.273 12:06:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.273 1+0 records in 00:02:38.273 1+0 records out 00:02:38.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062832 s, 167 MB/s 00:02:38.273 12:06:43 -- spdk/autotest.sh@118 -- # sync 00:02:38.273 12:06:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.273 12:06:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.273 12:06:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:46.414 12:06:50 -- spdk/autotest.sh@124 -- # uname -s 00:02:46.414 12:06:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:46.414 12:06:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.414 12:06:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:46.414 12:06:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:46.414 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:02:46.414 ************************************ 00:02:46.414 START TEST setup.sh 00:02:46.414 ************************************ 00:02:46.414 12:06:50 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.414 * Looking for test storage... 00:02:46.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.414 12:06:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:46.414 12:06:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:46.414 12:06:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.414 12:06:50 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:46.414 12:06:50 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:46.414 12:06:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:46.414 ************************************ 00:02:46.414 START TEST acl 00:02:46.414 ************************************ 00:02:46.414 12:06:50 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.414 * Looking for test storage... 
00:02:46.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.414 12:06:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.414 12:06:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:46.414 12:06:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:46.414 12:06:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:46.414 12:06:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:46.414 12:06:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:46.415 12:06:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:46.415 12:06:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.415 12:06:51 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.720 12:06:55 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:49.720 12:06:55 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:49.720 12:06:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.720 12:06:55 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:49.720 12:06:55 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.720 12:06:55 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:53.931 Hugepages 00:02:53.931 node hugesize free / total 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.931 00:02:53.931 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [the same ioatdma match-and-continue repeats for 0000:00:01.2 through 0000:00:01.7] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [the ioatdma match-and-continue then repeats for 0000:80:01.0 through 0000:80:01.7] 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:53.931 12:06:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:53.931 12:06:58 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:53.931 12:06:58 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:53.931 12:06:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.931 ************************************ 00:02:53.931 START TEST denied 00:02:53.931 ************************************ 00:02:53.931 12:06:58 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:02:53.931 12:06:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:53.931 12:06:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:53.931 12:06:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:53.931 12:06:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.931 12:06:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.143 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:58.143 12:07:02 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:58.143 12:07:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:58.144 12:07:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:58.144 12:07:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.144 12:07:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.395 00:03:02.395 real 0m8.830s 00:03:02.395 user 0m2.900s 00:03:02.395 sys 0m5.267s 00:03:02.395 12:07:07 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:02.395 12:07:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:02.395 ************************************ 00:03:02.395 END TEST denied 00:03:02.395 ************************************ 00:03:02.395 12:07:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.395 12:07:07 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:02.395 12:07:07 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:02.395 12:07:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.395 ************************************ 00:03:02.395 START TEST allowed 00:03:02.395 ************************************ 00:03:02.395 12:07:07 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:02.395 12:07:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:02.395 12:07:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:02.395 12:07:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:02.395 12:07:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.395 12:07:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.981 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:08.981 12:07:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:08.981 12:07:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:08.981 12:07:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:08.981 12:07:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.981 12:07:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.284 00:03:12.284 real 0m9.773s 00:03:12.284 user 0m2.945s 00:03:12.284 sys 0m5.187s 00:03:12.284 12:07:17 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:12.284 12:07:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:12.284 ************************************ 00:03:12.284 END TEST allowed 00:03:12.284 ************************************ 00:03:12.284 00:03:12.284 real 0m26.754s 00:03:12.284 user 0m8.960s 00:03:12.284 sys 0m15.678s 00:03:12.284 12:07:17 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:12.284 12:07:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:12.284 ************************************ 00:03:12.284 END TEST acl 00:03:12.284 ************************************ 00:03:12.284 12:07:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.284 12:07:17 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:12.284 12:07:17 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:12.284 12:07:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.284 ************************************ 00:03:12.284 START TEST hugepages 00:03:12.284 ************************************ 00:03:12.284 12:07:17 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.284 * Looking for test storage... 00:03:12.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106331680 kB' 'MemAvailable: 110578296 kB' 'Buffers: 3736 kB' 'Cached: 11268384 kB' 'SwapCached: 0 kB' 'Active: 7317272 kB' 'Inactive: 4480064 kB' 'Active(anon): 6921912 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528612 kB' 'Mapped: 219744 kB' 'Shmem: 6396696 kB' 'KReclaimable: 391044 kB' 'Slab: 1160252 kB' 'SReclaimable: 391044 kB' 'SUnreclaim: 769208 kB' 'KernelStack: 27200 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460856 kB' 'Committed_AS: 8365180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237200 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB' 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.284 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [the same compare-and-continue xtrace, with an IFS=': ' / read -r var val _ between iterations, repeats for each remaining /proc/meminfo field: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages; the log breaks off mid-loop after the AnonHugePages check]
00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.547 12:07:17 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.547 12:07:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.547 12:07:17 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:12.548 12:07:17 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:12.548 12:07:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.548 ************************************ 00:03:12.548 START TEST default_setup 00:03:12.548 ************************************ 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.548 12:07:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.761 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
00:03:16.761 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:16.761 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.761 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108499440 kB' 'MemAvailable: 112745992 kB' 'Buffers: 3736 kB' 'Cached: 11268504 kB' 'SwapCached: 0 kB' 'Active: 7333896 kB' 'Inactive: 4480064 kB' 'Active(anon): 6938536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544620 kB' 'Mapped: 220120 kB' 'Shmem: 6396816 kB' 'KReclaimable: 390916 kB' 'Slab: 1158040 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767124 kB' 'KernelStack: 27264 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8382412 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 237360 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: the mapfile'd meminfo snapshot is rescanned with IFS=': ' read -r var val _, one "continue" per key from MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
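The trace above is setup/common.sh's get_meminfo resolving AnonHugePages to 0: the helper snapshots /proc/meminfo (or a node's meminfo file) and emits one "continue" per key until the requested field matches, then echoes its value. A minimal standalone sketch of that lookup pattern, simplified for illustration (the helper name is invented; this is not the SPDK source):

  # Illustrative only: look a single key up in /proc/meminfo.
  # The real get_meminfo can also read /sys/devices/system/node/nodeN/meminfo,
  # first stripping the "Node N " line prefix (the mem=(...) expansion above).
  get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      # Every non-matching key is one of the "continue" lines in the xtrace.
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  get_meminfo_sketch Hugepagesize    # -> 2048 on this rig
  get_meminfo_sketch AnonHugePages   # -> 0, matching the trace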
00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.762 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108499740 kB' 'MemAvailable: 112746292 kB' 'Buffers: 3736 kB' 'Cached: 11268508 kB' 'SwapCached: 0 kB' 'Active: 7333000 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937640 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544168 kB' 'Mapped: 220016 kB' 'Shmem: 6396820 kB' 'KReclaimable: 390916 kB' 'Slab: 1157984 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767068 kB' 'KernelStack: 27248 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8382432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.763 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.763 12:07:21 
[setup/common.sh@31-32 xtrace condensed: every remaining /proc/meminfo field (SecPageTables through HugePages_Rsvd) is read and skipped with "continue" until HugePages_Surp matches]
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
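What the xtrace above is stepping through: setup/common.sh's get_meminfo helper snapshots /proc/meminfo (or a per-node sysfs copy) with mapfile, then scans it with IFS=': ' read -r var val _, skipping every field until the requested key matches, and echoes the bare value. That per-key skipping is why each meminfo field shows up once per lookup in this trace. A minimal streaming sketch of the same pattern, assuming the standard "Key: value kB" meminfo layout (the function name and the streaming structure are illustrative, not the SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob                         # for the +([0-9]) pattern used to strip "Node N " prefixes

    # Sketch of the get_meminfo pattern seen in the trace (illustrative only).
    # $1 = field to look up (e.g. HugePages_Rsvd), $2 = optional NUMA node number.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read the sysfs copy instead, as common.sh@23-24 does in the trace.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }      # per-node files prefix every field with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue # skip fields until the requested one matches
            echo "$val"                      # bare value; the "kB" unit lands in $_ and is dropped
            return 0
        done <"$mem_f"
        return 1                             # field not present
    }

    get_meminfo_sketch HugePages_Rsvd        # -> 0 on this box, matching the resv=0 result below

The real helper replays a mapfile snapshot rather than streaming, presumably so every field comes from one consistent read of the file.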
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.764 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108500336 kB' 'MemAvailable: 112746888 kB' 'Buffers: 3736 kB' 'Cached: 11268524 kB' 'SwapCached: 0 kB' 'Active: 7333500 kB' 'Inactive: 4480064 kB' 'Active(anon): 6938140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544632 kB' 'Mapped: 220520 kB' 'Shmem: 6396836 kB' 'KReclaimable: 390916 kB' 'Slab: 1157984 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767068 kB' 'KernelStack: 27248 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8383940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[setup/common.sh@31-32 xtrace condensed: every field is read and skipped until HugePages_Rsvd matches]
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.766 nr_hugepages=1024
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.766 resv_hugepages=0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.766 surplus_hugepages=0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.766 anon_hugepages=0
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.766 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot, identical to the one above except: MemFree: 108493140 kB, MemAvailable: 112739692 kB, Cached: 11268544 kB, Active: 7338176 kB, Active(anon): 6942816 kB, AnonPages: 549320 kB, Shmem: 6396856 kB, Committed_AS: 8388592 kB, VmallocUsed: 237332 kB]
[setup/common.sh@31-32 xtrace condensed: every field is read and skipped until HugePages_Total matches]
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
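The three lookups so far (surp=0, resv=0, HugePages_Total=1024) feed the consistency check the trace performs at setup/hugepages.sh@107 and again at @110 just below: the kernel-reported total must equal the requested page count plus surplus and reserved pages. Condensed into a standalone sketch, reusing the illustrative get_meminfo_sketch helper from above:

    # Sketch of the hugepage accounting done around hugepages.sh@107-110:
    # kernel total == requested pages + surplus pages + reserved pages.
    nr_hugepages=1024                                  # what the test requested
    surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi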
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
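get_nodes, whose expanded assignments appear above (nodes_sys[0]=1024, nodes_sys[1]=0 on this two-node box), records how the reserved pool landed on each NUMA node. A sketch of that walk; the hugepages-2048kB sysfs path is an assumption inferred from the 2048 kB Hugepagesize in the snapshots, not taken from the SPDK source:

    shopt -s extglob nullglob
    # Sketch: collect each node's 2 MiB hugepage count, as get_nodes does above.
    # Assumption: pages were reserved in the default 2048 kB pool.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                    # 2 on this machine
    echo "hugepages per node: ${nodes_sys[*]}"   # e.g. "1024 0"

The test then re-verifies each node individually, which is why the next lookup reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.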
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.768 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59535940 kB' 'MemUsed: 6123068 kB' 'SwapCached: 0 kB' 'Active: 1766544 kB' 'Inactive: 142704 kB' 'Active(anon): 1523252 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1731528 kB' 'Mapped: 103304 kB' 'AnonPages: 181020 kB' 'Shmem: 1345532 kB' 'KernelStack: 14088 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 512072 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 352536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace condensed: each node0 meminfo field is read and skipped while scanning for HugePages_Surp]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.769 node0=1024 expecting 1024 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.769 00:03:16.769 real 0m4.033s 00:03:16.769 user 0m1.584s 00:03:16.769 sys 0m2.448s 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:16.769 12:07:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:16.769 ************************************ 00:03:16.769 END TEST default_setup 00:03:16.770 ************************************ 00:03:16.770 12:07:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:16.770 12:07:22 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:16.770 12:07:22 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:16.770 12:07:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.770 ************************************ 00:03:16.770 START TEST per_node_1G_alloc 00:03:16.770 ************************************ 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:16.770 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:16.771 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.771 12:07:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.077 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.077 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.077 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.077 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.077 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:20.343 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.343 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.344 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.344 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.344 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.344 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- 
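The arithmetic in the trace above is visible in the values: get_test_nr_hugepages turns the requested 1048576 kB (1 GiB) into 512 pages, presumably by dividing by the 2048 kB default hugepage size, get_test_nr_hugepages_per_node then records 512 once per requested node, and NRHUGE=512 HUGENODE=0,1 is handed to scripts/setup.sh. The log does not show setup.sh's internals, so the following is only a minimal sketch of an equivalent per-node reservation, assuming nothing beyond the standard kernel sysfs layout for 2 MiB pages:

    #!/usr/bin/env bash
    # Hypothetical re-creation of the reservation step (needs root).
    # NRHUGE and HUGENODE mirror the variables in the trace; the sysfs
    # path below is standard kernel ABI, not SPDK-specific.
    NRHUGE=${NRHUGE:-512}
    HUGENODE=${HUGENODE:-0,1}
    IFS=',' read -ra nodes <<< "$HUGENODE"
    for node in "${nodes[@]}"; do
        echo "$NRHUGE" > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done

With 512 pages reserved on each of the two nodes, /proc/meminfo should afterwards report HugePages_Total: 1024, which is exactly what the snapshots below show.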
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.344 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108498972 kB' 'MemAvailable: 112745524 kB' 'Buffers: 3736 kB' 'Cached: 11268664 kB' 'SwapCached: 0 kB' 'Active: 7333008 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543936 kB' 'Mapped: 219060 kB' 'Shmem: 6396976 kB' 'KReclaimable: 390916 kB' 'Slab: 1158100 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767184 kB' 'KernelStack: 27264 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8376304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237472 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
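Two details of the get_meminfo preamble above are easy to miss: node is empty here, so the -e test against /sys/devices/system/node/node/meminfo fails and mem_f stays /proc/meminfo; and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the "Node <id> " prefix that only per-node meminfo lines carry, so one parser serves both files. A standalone sketch of that normalization (node 0 assumed for illustration; extglob must be enabled for the +([0-9]) pattern to work in the expansion):

    shopt -s extglob   # +([0-9]) below is an extglob pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # Per-node lines read "Node 0 MemTotal: ... kB"; dropping the prefix
    # makes them parse exactly like the corresponding /proc/meminfo lines.
    mem=("${mem[@]#Node +([0-9]) }")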
[xtrace elided: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, and a non-matching compare against AnonHugePages followed by continue for each /proc/meminfo field from MemTotal through HardwareCorrupted]
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
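Everything from local get=AnonHugePages down to that return 0 is a single get_meminfo call: split each captured meminfo line on ': ', walk the fields in order, and print the value when the requested key matches, which is how anon ends up 0 here (the snapshot reports AnonHugePages: 0 kB). A condensed sketch of that scan; the real setup/common.sh iterates the mem array captured above rather than reading the file directly, so treat this as an illustration, not the exact code:

    get_meminfo() {
        # Sketch of the loop the xtrace shows (simplified).
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # "AnonHugePages:  0 kB" splits into var=AnonHugePages, val=0, _=kB
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < /proc/meminfo
        return 1
    }
    anon=$(get_meminfo AnonHugePages)   # yields 0 in this run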
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.345 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108497932 kB' 'MemAvailable: 112744484 kB' 'Buffers: 3736 kB' 'Cached: 11268668 kB' 'SwapCached: 0 kB' 'Active: 7332908 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937548 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543300 kB' 'Mapped: 219052 kB' 'Shmem: 6396980 kB' 'KReclaimable: 390916 kB' 'Slab: 1158096 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767180 kB' 'KernelStack: 27264 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8376324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237440 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
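A reading note for the comparisons that follow: the right-hand side prints as \H\u\g\e\P\a\g\e\s\_\S\u\r\p only because the script compares against a quoted variable inside [[ ]], and bash's xtrace re-quotes that word by backslash-escaping every character so it is visibly a literal string rather than a glob pattern. The effect is reproducible in any bash session:

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]   # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]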
[xtrace elided: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, and a non-matching compare against HugePages_Surp followed by continue for each /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
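At this point verify_nr_hugepages has banked anon=0 and surp=0 and still needs the reserved count. For reference, the four HugePages_* counters it samples, annotated with the values every snapshot in this log reports:

    # HugePages_Total: 1024   pages in the static pool (512 per node here)
    # HugePages_Free:  1024   pool pages not currently backing any mapping
    # HugePages_Rsvd:  0      pages promised to a mapping but not yet faulted in
    # HugePages_Surp:  0      overcommit pages allocated above nr_hugepages

Since the snapshot shows HugePages_Rsvd: 0, the get_meminfo HugePages_Rsvd call that follows can only return 0.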
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.347 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108498760 kB' 'MemAvailable: 112745312 kB' 'Buffers: 3736 kB' 'Cached: 11268684 kB' 'SwapCached: 0 kB' 'Active: 7332272 kB' 'Inactive: 4480064 kB' 'Active(anon): 6936912 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543100 kB' 'Mapped: 218976 kB' 'Shmem: 6396996 kB' 'KReclaimable: 390916 kB' 'Slab: 1158100 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767184 kB' 'KernelStack: 27328 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8374740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237440 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace: the common.sh@31-32 read loop walks every field from MemTotal through HugePages_Free without a match]
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.350 nr_hugepages=1024
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.350 resv_hugepages=0
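At this point hugepages.sh has recorded the values it just read (surp=0, resv=0), and the summary echoes and consistency checks follow below. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from above (variable names approximate, not the exact SPDK identifiers):

nr_hugepages=1024                           # requested earlier in the test
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
# the hugepages.sh@107/@110-style check: what the kernel reports must equal
# the request plus surplus plus reserved, i.e. 1024 == 1024 + 0 + 0 here
total=$(get_meminfo_sketch HugePages_Total)
(( total == nr_hugepages + surp + resv ))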
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.350 surplus_hugepages=0
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.350 anon_hugepages=0
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.350 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.351 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.351 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.351 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108497500 kB' 'MemAvailable: 112744052 kB' 'Buffers: 3736 kB' 'Cached: 11268708 kB' 'SwapCached: 0 kB' 'Active: 7332360 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937000 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543132 kB' 'Mapped: 218976 kB' 'Shmem: 6397020 kB' 'KReclaimable: 390916 kB' 'Slab: 1158100 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767184 kB' 'KernelStack: 27264 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8376368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237456 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace: the common.sh@31-32 read loop again walks every field from MemTotal through Unaccepted without a match]
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
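The get_nodes call traced just below enumerates the NUMA topology before the per-node assertions. A sketch of that enumeration under the layout this run shows (two nodes, 2048 kB pages; sysfs paths taken from the trace, exact SPDK variable names may differ):

shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # 512 pages per node in this run, 1024 in total
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}                  # 2 on this CI host
(( no_nodes > 0 ))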
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.352 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.353 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60585500 kB' 'MemUsed: 5073508 kB' 'SwapCached: 0 kB' 'Active: 1765004 kB' 'Inactive: 142704 kB' 'Active(anon): 1521712 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1731652 kB' 'Mapped: 102932 kB' 'AnonPages: 179196 kB' 'Shmem: 1345656 kB' 'KernelStack: 14008 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 512032 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 352496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace: the common.sh@31-32 read loop scans node0's fields from MemTotal onward; the log is truncated mid-scan at the HugePages_Total comparison, 00:03:20.617]
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.617 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.618 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47912488 kB' 'MemUsed: 12767316 kB' 'SwapCached: 0 kB' 'Active: 5567664 kB' 'Inactive: 4337360 kB' 'Active(anon): 5415596 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337360 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9540792 kB' 'Mapped: 116044 kB' 'AnonPages: 364264 kB' 'Shmem: 5051364 kB' 'KernelStack: 13304 kB' 'PageTables: 5288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 231380 kB' 'Slab: 646068 kB' 'SReclaimable: 231380 kB' 'SUnreclaim: 414688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the get_meminfo read loop checks each node1 meminfo key (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Surp; none matches, so every iteration takes the "continue" branch]
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:20.619
00:03:20.619 real 0m3.935s
00:03:20.619 user 0m1.592s
00:03:20.619 sys 0m2.402s
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:20.619 12:07:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.619 ************************************
00:03:20.619 END TEST per_node_1G_alloc
00:03:20.619 ************************************
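The long xtrace runs above are all one mechanism: get_meminfo in setup/common.sh opens /proc/meminfo (or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo), strips the "Node N " prefix the per-node files carry, and reads key/value pairs until the requested field matches. A minimal standalone sketch of that lookup; the helper name meminfo_get is ours, not the SPDK script's:

    #!/usr/bin/env bash
    # Hypothetical helper condensing the lookup that get_meminfo performs above.
    meminfo_get() {
        local get=$1 node=${2:-} key val unit
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; strip that first,
        # then scan "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r key val unit; do
            if [[ $key == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # The query traced above: surplus hugepages on NUMA node 1 (prints 0 here).
    meminfo_get HugePages_Surp 1

The linear key scan is why the trace shows one IFS/read/compare/continue triple per meminfo field before the HugePages_Surp hit.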
00:03:20.619 12:07:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:20.619 12:07:26 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:20.619 12:07:26 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:20.619 12:07:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.619 ************************************
00:03:20.619 START TEST even_2G_alloc
00:03:20.619 ************************************
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
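The sizing trace above reduces to simple arithmetic: assuming the size argument is in kB and the 2048 kB default hugepage size (the later Hugepagesize: 2048 kB line is consistent with this), 2097152 kB yields nr_hugepages=1024, which get_test_nr_hugepages_per_node then splits evenly over the two NUMA nodes. A sketch of that arithmetic with our own variable names, not hugepages.sh's exact loop:

    #!/usr/bin/env bash
    # Sketch of the even_2G_alloc sizing seen above (names are ours).
    size_kb=2097152                               # 2 GiB worth of hugepages requested
    hugepage_kb=2048                              # default hugepage size on this box
    nr_hugepages=$(( size_kb / hugepage_kb ))     # 2097152 / 2048 = 1024

    no_nodes=2
    declare -a nodes_test
    per_node=$(( nr_hugepages / no_nodes ))       # 512 per node for an even split
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$per_node
    done
    declare -p nodes_test   # nodes_test=([0]="512" [1]="512"), matching the trace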
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.619 12:07:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.831 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:24.831 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.831 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108493292 kB' 'MemAvailable: 112739844 kB' 'Buffers: 3736 kB' 'Cached: 11268856 kB' 'SwapCached: 0 kB' 'Active: 7333264 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937904 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543628 kB' 'Mapped: 219104 kB' 'Shmem: 6397168 kB' 'KReclaimable: 390916 kB' 'Slab: 1158232 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767316 kB' 'KernelStack: 27264 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8374560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: the get_meminfo read loop checks each system meminfo key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) against AnonHugePages; none matches, so every iteration takes the "continue" branch]
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
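The anon accounting above starts with the test at setup/hugepages.sh@96: the kernel marks the active transparent-hugepage mode with brackets in /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never"), and AnonHugePages is only worth querying when that mode is not [never]. A standalone sketch of that check, with a helper name of our own:

    #!/usr/bin/env bash
    # Sketch of the THP-mode test from setup/hugepages.sh@96 above.
    thp_enabled() {
        local modes
        modes=$(</sys/kernel/mm/transparent_hugepage/enabled)
        # The kernel brackets the selected mode, e.g. "always [madvise] never";
        # anonymous hugepages can only appear when that mode is not [never].
        [[ $modes != *"[never]"* ]]
    }

    if thp_enabled; then
        echo "THP active: query AnonHugePages"   # the path taken in the trace
    else
        echo "THP disabled: anon stays 0"
    fi

In the run above the mode is [madvise], so the check passes and get_meminfo AnonHugePages runs; it returns 0 kB, hence anon=0.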
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.832 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108493552 kB' 'MemAvailable: 112740104 kB' 'Buffers: 3736 kB' 'Cached: 11268860 kB' 'SwapCached: 0 kB' 'Active: 7332960 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937600 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543784 kB' 'Mapped: 218996 kB' 'Shmem: 6397172 kB' 'KReclaimable: 390916 kB' 'Slab: 1158232 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767316 kB' 'KernelStack: 27264 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8374580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237328 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: the get_meminfo read loop again walks the system meminfo keys (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty) against HugePages_Surp; the captured log is cut off at the Writeback iteration]
-- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.833 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
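For readability, here is a reconstruction of what setup/common.sh's get_meminfo is doing in the trace above. It is a minimal sketch inferred from the xtrace, not the verbatim SPDK source; apart from the identifiers visible in the trace (get, node, mem_f, mem, var, val), every detail is an assumption.

    # Sketch of get_meminfo as inferred from common.sh@16-@33 in this trace.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        shopt -s extglob  # required by the +([0-9]) pattern below
        # With node= empty the path degenerates to .../node/node/meminfo, the
        # -e test fails (@23), and the system-wide /proc/meminfo is kept.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files (@29)
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"     # e.g. var=MemTotal val=126338812 _=kB
            [[ $var == "$get" ]] || continue  # the long run of "continue" entries above
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 on this host, which hugepages.sh@99 captures as surp=0.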
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.834 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.835 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108493552 kB' 'MemAvailable: 112740104 kB' 'Buffers: 3736 kB' 'Cached: 11268876 kB' 'SwapCached: 0 kB' 'Active: 7332956 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543784 kB' 'Mapped: 218996 kB' 'Shmem: 6397188 kB' 'KReclaimable: 390916 kB' 'Slab: 1158232 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767316 kB' 'KernelStack: 27264 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8374600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237328 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[... xtrace elided: the same common.sh@31-@32 per-field loop, now issuing "continue" for each key that is not HugePages_Rsvd ...]
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:24.836 nr_hugepages=1024
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.836 resv_hugepages=0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.836 surplus_hugepages=0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.836 anon_hugepages=0
00:03:24.836 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
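The two arithmetic checks at hugepages.sh@107 and @109 encode the invariant this test cares about: all 1024 requested hugepages must be accounted for, and none of them may be surplus or reserved. A minimal sketch of that check, reusing the get_meminfo reconstruction above (the constant 1024 and the names nr_hugepages, surp, resv are from the trace; everything else is an assumption):

    # Sketch of the accounting check at hugepages.sh@107-@109 in this trace.
    expected=1024                                 # pages requested by the even_2G_alloc test
    surp=$(get_meminfo HugePages_Surp)            # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
    nr_hugepages=1024                             # echoed as nr_hugepages=1024 above
    (( expected == nr_hugepages + surp + resv ))  # every page allocatable, surplus, or reserved
    (( expected == nr_hugepages ))                # and in fact none surplus or reserved

If the surrounding script runs under set -e, either arithmetic test evaluating to false returns a nonzero status and aborts the run at this point.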
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.837 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108493300 kB' 'MemAvailable: 112739852 kB' 'Buffers: 3736 kB' 'Cached: 11268916 kB' 'SwapCached: 0 kB' 'Active: 7332624 kB' 'Inactive: 4480064 kB' 'Active(anon): 6937264 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543376 kB' 'Mapped: 218996 kB' 'Shmem: 6397228 kB' 'KReclaimable: 390916 kB' 'Slab: 1158232 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 767316 kB' 'KernelStack: 27248 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8374624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237328 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[... xtrace elided: the same per-field loop, now matching against HugePages_Total; the excerpt ends mid-loop at 00:03:24.838, before the matching key is reached ...]
12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.838 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60592068 kB' 'MemUsed: 5066940 kB' 'SwapCached: 0 kB' 'Active: 1765212 kB' 'Inactive: 142704 kB' 'Active(anon): 1521920 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1731848 kB' 'Mapped: 102952 kB' 'AnonPages: 179208 kB' 'Shmem: 1345852 kB' 'KernelStack: 14056 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
159536 kB' 'Slab: 512088 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 352552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.839 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47900756 kB' 'MemUsed: 12779048 kB' 'SwapCached: 0 kB' 'Active: 5567436 kB' 'Inactive: 4337360 kB' 'Active(anon): 5415368 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337360 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9540824 kB' 'Mapped: 116044 kB' 'AnonPages: 364192 kB' 'Shmem: 5051396 kB' 'KernelStack: 13192 kB' 
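The records above are setup/common.sh's get_meminfo helper, traced twice: once against /proc/meminfo for HugePages_Total (echoing 1024) and once against node0's meminfo for HugePages_Surp (echoing 0). A minimal bash sketch of the pattern the xtrace shows; this is a reconstruction from the trace records, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Reconstructed from the xtrace above; not the verbatim SPDK code.
    shopt -s extglob    # the +([0-9]) pattern below needs extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live under /sys; fall back to the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "Key: value kB" on ':' and spaces; _ swallows the unit.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the long runs of continue above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # system-wide: 1024 in this run
    get_meminfo HugePages_Surp 0    # node 0 only: 0 in this run

Scanning line by line like this is why the trace shows one test-and-continue pair per meminfo key before each match.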
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47900756 kB' 'MemUsed: 12779048 kB' 'SwapCached: 0 kB' 'Active: 5567436 kB' 'Inactive: 4337360 kB' 'Active(anon): 5415368 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337360 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9540824 kB' 'Mapped: 116044 kB' 'AnonPages: 364192 kB' 'Shmem: 5051396 kB' 'KernelStack: 13192 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 231380 kB' 'Slab: 646144 kB' 'SReclaimable: 231380 kB' 'SUnreclaim: 414764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.840 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the node1 scan skips every key from MemFree through HugePages_Free the same way before matching]
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:24.841 node0=512 expecting 512
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:24.841 node1=512 expecting 512
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:24.841
00:03:24.841 real	0m3.961s
00:03:24.841 user	0m1.583s
00:03:24.841 sys	0m2.444s
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:24.841 12:07:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.841 ************************************
00:03:24.841 END TEST even_2G_alloc
00:03:24.841 ************************************
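The even_2G_alloc pass that just ended asserts two things visible in the trace: the global pool equals the request plus surplus and reserved pages (hugepages.sh@110), and each NUMA node ended up with its even share, printed as 'nodeN=512 expecting 512'. A condensed re-creation of those checks, assuming get_meminfo and shopt -s extglob from the sketch above; the simplified control flow is an illustration, not the SPDK source:

    # nr_hugepages/surp/resv names follow the trace; values are this run's.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run

    # @110: the global pool must equal the request plus surplus and reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # @115-@128: each node should hold its even share of the request (1024
    # pages split over 2 nodes = 512) plus any per-node surplus pages.
    for path in /sys/devices/system/node/node+([0-9]); do
        node=${path##*node}
        want=$(( nr_hugepages / 2 + resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $want"
    done

In this run both nodes report 512, so the comparison at hugepages.sh@130 ([[ 512 == 512 ]]) passes and the test ends cleanly.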
00:03:24.841 12:07:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:24.841 12:07:30 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:24.841 12:07:30 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:24.841 12:07:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.841 ************************************
00:03:24.841 START TEST odd_alloc
00:03:24.841 ************************************
00:03:24.841 12:07:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:03:24.841 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:24.841 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
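odd_alloc deliberately sizes its request so the page count cannot divide evenly: the trace turns size=2098176 kB (HUGEMEM=2049, i.e. 2049 MiB) into nr_hugepages=1025 and splits it 513/512 across the two nodes, filling nodes_test from the top index down. A sketch that reproduces those numbers; the rounding that yields 1025 pages is inferred (ceiling division reproduces the traced value), so treat this as an illustration rather than the hugepages.sh source:

    # 2098176 kB on 2048 kB pages rounds up to 1025 pages; two nodes
    # then split the odd total as 513 + 512.
    size=2098176                 # kB, from HUGEMEM=2049
    default_hugepages=2048       # kB per 2M huge page
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))  # 1025

    _no_nodes=2
    declare -a nodes_test
    base=$(( nr_hugepages / _no_nodes ))   # 512
    rem=$(( nr_hugepages % _no_nodes ))    # 1 odd page left over
    for (( node = 0; node < _no_nodes; node++ )); do
        # the first 'rem' nodes absorb one extra page each
        nodes_test[node]=$(( base + (node < rem ? 1 : 0) ))
    done
    echo "${nodes_test[@]}"                # 513 512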
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.842 12:07:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.111 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:29.111 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:29.111 12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
12:07:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
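verify_nr_hugepages only counts anonymous huge pages when transparent hugepages are not globally disabled: the @96 guard pattern-matches the bracketed active policy in the sysfs file ('always [madvise] never' on this host) before calling get_meminfo AnonHugePages. A paraphrase of that guard, assuming the get_meminfo sketch above:

    # The bracketed word in the sysfs file marks the active THP policy,
    # e.g. "always [madvise] never"; skip the counter only when it is
    # globally disabled. Paraphrased from hugepages.sh@96-@97, not copied.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    echo "anon=$anon"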
'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.112 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:29.112 [ xtrace condensed: the same IFS=': '/read/compare/continue cycle repeats for every remaining /proc/meminfo key, Inactive(anon) through HardwareCorrupted ]
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
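The xtrace above is SPDK's setup/common.sh helper get_meminfo doing a linear scan of /proc/meminfo: split each line on ': ', compare the key against the requested field, and echo the value on the first match (here AnonHugePages, giving anon=0). A minimal sketch of that loop, reconstructed from the trace rather than copied from the repo, so treat the body as illustrative even though the variable names and patterns are the ones the trace shows:

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # With a node id, read the per-NUMA-node copy from sysfs instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"               # e.g. 0 for AnonHugePages on this box
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages             # prints 0 here

For a one-off lookup outside the harness, awk '/^AnonHugePages:/ {print $2}' /proc/meminfo gives the same answer in a single pass; the bash loop earns its keep by also handling the per-node sysfs files.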
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.113 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108483372 kB' 'MemAvailable: 112729924 kB' 'Buffers: 3736 kB' 'Cached: 11269040 kB' 'SwapCached: 0 kB' 'Active: 7334860 kB' 'Inactive: 4480064 kB' 'Active(anon): 6939500 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545404 kB' 'Mapped: 219028 kB' 'Shmem: 6397352 kB' 'KReclaimable: 390916 kB' 'Slab: 1157708 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766792 kB' 'KernelStack: 27200 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8375560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237264 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
00:03:29.113 [ xtrace condensed: per-key compare/continue scan of the snapshot above until HugePages_Surp matches ]
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
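Note the two guards traced at common.sh@23 and @25: node is empty on this call, so /sys/devices/system/node/node/meminfo does not exist and the helper falls back to /proc/meminfo. With a node id it would read the sysfs copy, whose lines carry a "Node N " prefix, which is exactly what the ${mem[@]#Node +([0-9]) } strip at @29 removes. A hedged example of the per-node shape (node number and byte counts are illustrative, not from this log):

    # Hypothetical per-node lookup; sysfs prefixes every line with "Node N ":
    #   $ head -n 2 /sys/devices/system/node/node0/meminfo
    #   Node 0 MemTotal:  63169406 kB     <- values illustrative
    #   Node 0 MemFree:   54241686 kB
    get_meminfo HugePages_Total 0   # same scan, after stripping "Node 0 "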
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.115 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108483400 kB' 'MemAvailable: 112729952 kB' 'Buffers: 3736 kB' 'Cached: 11269056 kB' 'SwapCached: 0 kB' 'Active: 7334852 kB' 'Inactive: 4480064 kB' 'Active(anon): 6939492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545400 kB' 'Mapped: 219028 kB' 'Shmem: 6397368 kB' 'KReclaimable: 390916 kB' 'Slab: 1157708 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766792 kB' 'KernelStack: 27200 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8375584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237264 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
00:03:29.115 [ xtrace condensed: per-key compare/continue scan of the snapshot above until HugePages_Rsvd matches ]
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:29.117 nr_hugepages=1025
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:29.117 resv_hugepages=0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:29.117 surplus_hugepages=0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:29.117 anon_hugepages=0
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
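This is the payoff of the three lookups: the odd_alloc case requests an odd page count (1025, which by design cannot split evenly across NUMA nodes) and then checks that the kernel's hugepage pool reconciles. Restated from the trace, with the echoed values and the two arithmetic assertions at hugepages.sh@107 and @109:

    anon=0              # AnonHugePages  -- transparent hugepages, must stay out of the pool
    surp=0              # HugePages_Surp -- pages allocated beyond the configured pool
    resv=0              # HugePages_Rsvd -- reserved but not yet faulted in
    nr_hugepages=1025   # the odd count this test case requested

    # Either assertion returning nonzero would fail the test run:
    (( 1025 == nr_hugepages + surp + resv ))
    (( 1025 == nr_hugepages ))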
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108483400 kB' 'MemAvailable: 112729952 kB' 'Buffers: 3736 kB' 'Cached: 11269072 kB' 'SwapCached: 0 kB' 'Active: 7334752 kB' 'Inactive: 4480064 kB' 'Active(anon): 6939392 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545232 kB' 'Mapped: 219028 kB' 'Shmem: 6397384 kB' 'KReclaimable: 390916 kB' 'Slab: 1157708 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766792 kB' 'KernelStack: 27184 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8375604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237264 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
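One consistency check can be read straight off the snapshot above: Hugetlb is the page count times the page size, 1025 x 2048 kB = 2099200 kB. A small verification sketch using the get_meminfo sketched earlier (the cross-check itself is mine, not part of the test):

    total=$(get_meminfo HugePages_Total)   # 1025 pages
    size_kb=$(get_meminfo Hugepagesize)    # 2048 kB per page
    hugetlb_kb=$(get_meminfo Hugetlb)      # 2099200 kB in the pool
    (( hugetlb_kb == total * size_kb ))    # 1025 * 2048 = 2099200 -> holds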
setup/common.sh@32 -- # continue 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.117 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.118 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60595708 kB' 'MemUsed: 5063300 kB' 'SwapCached: 0 kB' 'Active: 1764320 kB' 'Inactive: 142704 kB' 'Active(anon): 1521028 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1731956 kB' 'Mapped: 102984 kB' 'AnonPages: 178312 kB' 'Shmem: 1345960 kB' 'KernelStack: 14024 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 511816 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 352280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
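[editor's note] The trace above is get_meminfo resolving HugePages_Total from /proc/meminfo (1025 pages), followed by get_nodes discovering two NUMA nodes. A minimal bash sketch of the parsing logic as reconstructed from the traced statements; the function wrapper and the explicit loop-over-array form are editorial assumptions, not a quote of setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {                       # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total           # -> 1025 on this box
    get_meminfo HugePages_Surp 0          # -> 0, read from node0's meminfo

This linear scan is why the raw log carries one compare/continue pair per meminfo key: the reader walks the snapshot top to bottom until the requested field matches.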
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.119 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60595708 kB' 'MemUsed: 5063300 kB' 'SwapCached: 0 kB' 'Active: 1764320 kB' 'Inactive: 142704 kB' 'Active(anon): 1521028 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1731956 kB' 'Mapped: 102984 kB' 'AnonPages: 178312 kB' 'Shmem: 1345960 kB' 'KernelStack: 14024 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 511816 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 352280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: repetitive xtrace at setup/common.sh@31-32, one compare/continue pair per node0 meminfo key preceding HugePages_Surp, elided]
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.120 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47888128 kB' 'MemUsed: 12791676 kB' 'SwapCached: 0 kB' 'Active: 5571044 kB' 'Inactive: 4337360 kB' 'Active(anon): 5418976 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337360 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9540876 kB' 'Mapped: 116040 kB' 'AnonPages: 367584 kB' 'Shmem: 5051448 kB' 'KernelStack: 13160 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 231380 kB' 'Slab: 645892 kB' 'SReclaimable: 231380 kB' 'SUnreclaim: 414512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[log condensed: repetitive xtrace at setup/common.sh@31-32, one compare/continue pair per node1 meminfo key preceding HugePages_Surp, elided]
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
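[editor's note] Both per-node HugePages_Surp reads returned 0, so the test only has to confirm how the kernel split the odd request. The arithmetic encoded by the checks above, as an illustrative sketch (not the script's literal code):

    nr_hugepages=1025 no_nodes=2 surp=0 resv=0
    (( nr_hugepages + surp + resv == 1025 ))     # the hugepages.sh@110 check
    per_node=$(( nr_hugepages / no_nodes ))      # 512
    extra=$(( nr_hugepages % no_nodes ))         # 1: odd counts cannot split evenly
    echo "$per_node / $(( per_node + extra ))"   # 512 / 513, matching the two
                                                 # node snapshots above

An odd page count is the whole point of odd_alloc: one NUMA node must end up with one page more (node0 reported HugePages_Total: 512, node1 reported 513).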
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.121 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:29.122 node0=512 expecting 513
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:29.122 node1=513 expecting 512
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:29.122
00:03:29.122 real	0m4.031s
00:03:29.122 user	0m1.622s
00:03:29.122 sys	0m2.475s
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:29.122 12:07:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:29.122 ************************************
00:03:29.122 END TEST odd_alloc
00:03:29.122 ************************************
00:03:29.122 12:07:34 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:29.122 12:07:34 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:29.122 12:07:34 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:29.122 12:07:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.122 ************************************
00:03:29.122 START TEST custom_alloc
00:03:29.122 ************************************
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
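[editor's note] get_test_nr_hugepages converts a requested size into a 2 MiB page count. The values traced above and below are consistent with the argument being in kB against the 'Hugepagesize: 2048 kB' reported earlier; treat this as an inference from the log, not a quote of hugepages.sh:

    hugepagesize=2048                        # kB, i.e. get_meminfo Hugepagesize
    for size in 1048576 2097152; do          # 1 GiB and 2 GiB requests, in kB
        (( size >= hugepagesize )) && echo $(( size / hugepagesize ))
    done                                     # -> 512 (at @57 above) and 1024
                                             #    (at @57 in the second run below)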
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
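[editor's note] get_test_nr_hugepages_per_node runs twice above with two different outcomes: with no per-node targets it halves the 512-page request into nodes_test=(256 256), and once nodes_hp[] is populated it copies those targets instead. A condensed sketch of both branches, reconstructed from the trace rather than quoted from hugepages.sh:

    nodes_hp=()                      # empty on the first call, (512 1024) later
    _nr_hugepages=512 no_nodes=2 _no_nodes=2
    nodes_test=()
    if (( ${#nodes_hp[@]} > 0 )); then
        # custom targets win: mirror nodes_hp into nodes_test (hugepages.sh@75-76)
        for n in "${!nodes_hp[@]}"; do nodes_test[n]=${nodes_hp[n]}; done
    else
        # default: split the request evenly across the nodes (hugepages.sh@81-84)
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / no_nodes ))   # 256 each
            (( _no_nodes-- ))
        done
    fi
    echo "${nodes_test[@]}"          # 256 256 here; 512 1024 once nodes_hp is set

Here custom_alloc pins nodes_hp[0]=512 and nodes_hp[1]=1024, so the copy branch is what feeds the HUGENODE string built next.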
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
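[editor's note] HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' is the contract handed to scripts/setup.sh, whose driver output follows below. A hypothetical distillation of the per-node allocation such a string drives; the parsing and the sudo tee are editorial assumptions about the mechanism, not setup.sh's literal code, and only the sysfs path is standard kernel ABI:

    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    IFS=, read -ra pairs <<< "$HUGENODE"
    for pair in "${pairs[@]}"; do
        node=${pair#nodes_hp[}; node=${node%%]*}   # -> 0, then 1
        pages=${pair#*=}                           # -> 512, then 1024
        # per-node 2 MiB hugepage pool, writable by root:
        echo "$pages" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done

With 512 + 1024 = 1536 pages requested in total, verify_nr_hugepages below starts from the same figure (nr_hugepages=1536).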
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.122 12:07:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:32.426 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:32.426 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:32.426 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
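Every PCI function setup.sh touches is reported as already bound to vfio-pci, so no rebinding is needed. One way to reproduce that per-device check, as a hedged sketch (demo_driver_of is an invented name; the sysfs layout is standard Linux, not SPDK-specific):

    #!/usr/bin/env bash
    # Resolve the driver a PCI function is bound to via its sysfs symlink.
    demo_driver_of() {
        local link="/sys/bus/pci/devices/$1/driver"
        if [[ -e $link ]]; then
            basename "$(readlink -f "$link")"   # e.g. vfio-pci
        else
            echo unbound
        fi
    }

    demo_driver_of 0000:65:00.0   # the 144d a80a device above; prints vfio-pci on this host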
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.426 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107470460 kB' 'MemAvailable: 111717012 kB' 'Buffers: 3736 kB' 'Cached: 11269204 kB' 'SwapCached: 0 kB' 'Active: 7336556 kB' 'Inactive: 4480064 kB' 'Active(anon): 6941196 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546808 kB' 'Mapped: 219160 kB' 'Shmem: 6397516 kB' 'KReclaimable: 390916 kB' 'Slab: 1157092 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766176 kB' 'KernelStack: 27392 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8413704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237424 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: setup/common.sh@31/@32 walk every key of the snapshot above, MemTotal through HardwareCorrupted, against \A\n\o\n\H\u\g\e\P\a\g\e\s, hitting continue on each non-match; timestamps advance from 00:03:32.426 to 00:03:32.427 partway through]
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
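The get_meminfo call traced above snapshots the whole of /proc/meminfo (or a node's meminfo under sysfs) into an array and scans it key by key with IFS=': '. A self-contained sketch of that pattern, assuming bash with extglob (demo_get_meminfo is an invented name, not the common.sh function, though the prefix-strip and read loop follow the trace):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    demo_get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo var val _ line
        local -a mem

        # Per-node statistics live under sysfs when a node is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 MemTotal: ..."; drop that prefix
        # so the same loop digests both file layouts.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    demo_get_meminfo AnonHugePages   # prints 0 on this host, as the trace does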
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.427 12:07:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107472164 kB' 'MemAvailable: 111718716 kB' 'Buffers: 3736 kB' 'Cached: 11269208 kB' 'SwapCached: 0 kB' 'Active: 7335660 kB' 'Inactive: 4480064 kB' 'Active(anon): 6940300 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545884 kB' 'Mapped: 219092 kB' 'Shmem: 6397520 kB' 'KReclaimable: 390916 kB' 'Slab: 1157080 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766164 kB' 'KernelStack: 27184 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8378988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237360 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: the same setup/common.sh@31/@32 loop scans MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, continuing on every non-match; timestamps roll from 12:07:37 to 12:07:38 partway through]
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
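surp, and next HugePages_Rsvd, feed the consistency check inside verify_nr_hugepages; the exact comparison lives in hugepages.sh. A hedged approximation of the idea (demo_* names are invented, and treating surplus pages as outside the configured pool is an assumption, not the script's exact formula):

    #!/usr/bin/env bash
    # Sketch: compare the kernel's hugepage pool against the requested size.
    demo_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

    demo_verify_nr_hugepages() {
        local expected=$1 total surp resv
        total=$(demo_meminfo HugePages_Total)
        surp=$(demo_meminfo HugePages_Surp)    # pages allocated beyond the pool
        resv=$(demo_meminfo HugePages_Rsvd)    # promised but not yet faulted in
        # Assumption: surplus pages do not count toward the configured pool.
        if (( total - surp == expected )); then
            echo "OK: $total total, $surp surplus, $resv reserved"
        else
            echo "FAIL: expected $expected, kernel reports $((total - surp))" >&2
            return 1
        fi
    }

    demo_verify_nr_hugepages 1536   # matches HugePages_Total: 1536 in the dumps above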
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.429 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107471024 kB' 'MemAvailable: 111717576 kB' 'Buffers: 3736 kB' 'Cached: 11269224 kB' 'SwapCached: 0 kB' 'Active: 7336468 kB' 'Inactive: 4480064 kB' 'Active(anon): 6941108 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546732 kB' 'Mapped: 218996 kB' 'Shmem: 6397536 kB' 'KReclaimable: 390916 kB' 'Slab: 1157084 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766168 kB' 'KernelStack: 27328 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8379012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237440 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[xtrace condensed: the setup/common.sh@31/@32 loop tests MemTotal through AnonHugePages against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, continuing past each non-match; the timestamp advances to 00:03:32.695 and then 00:03:32.696]
00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.696 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:32.697 nr_hugepages=1536
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:32.697 resv_hugepages=0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:32.697 surplus_hugepages=0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:32.697 anon_hugepages=0
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107471572 kB' 'MemAvailable: 111718124 kB' 'Buffers: 3736 kB' 'Cached: 11269244 kB' 'SwapCached: 0 kB' 'Active: 7336112 kB' 'Inactive: 4480064 kB' 'Active(anon): 6940752 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546368 kB' 'Mapped: 218996 kB' 'Shmem: 6397556 kB' 'KReclaimable: 390916 kB' 'Slab: 1157084 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 766168 kB' 'KernelStack: 27360 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8379036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237456 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
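The setup/common.sh@17-@33 statements traced through here are single calls to the get_meminfo helper: pick /proc/meminfo (or a per-node meminfo when a node argument is given), strip the per-node "Node N " prefix, then walk the file key by key until the requested key matches. A minimal reconstruction inferred from this trace follows; it is a sketch, not the SPDK source, so the exact option handling and return codes are assumptions.

    # Sketch of get_meminfo as implied by the xtrace (setup/common.sh@17-@33).
    # Reconstructed from the log, not copied from SPDK -- details may differ.
    shopt -s extglob
    get_meminfo() {
        local get=$1  # key to look up, e.g. HugePages_Rsvd
        local node=$2 # optional NUMA node number
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node view when a node is named and the kernel has one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes each line with "Node N "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        # Split "Key: value [kB]" on ':' and ' '; print the value on a match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 on this box; get_meminfo HugePages_Surp 0 would read node0's counter instead, as the trace below goes on to do.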
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.697 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read/compare/continue cycle walks every key from MemTotal through Unaccepted; none of them match HugePages_Total]
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
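get_nodes at setup/hugepages.sh@27-@33 records how many hugepages each NUMA node currently holds (512 on node0, 1024 on node1) and counts the nodes. A rough sketch of that bookkeeping, again inferred from the trace; how hugepages.sh actually fetches the per-node count is an assumption here (get_meminfo is reused purely for illustration):

    # Sketch of the get_nodes bookkeeping (setup/hugepages.sh@27-@33, inferred).
    shopt -s extglob nullglob
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # nodes_sys[N] = hugepages currently on node N; this run: 512, 1024.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        ((no_nodes > 0)) # at least one node must exist
    }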
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.699 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60614288 kB' 'MemUsed: 5044720 kB' 'SwapCached: 0 kB' 'Active: 1764640 kB' 'Inactive: 142704 kB' 'Active(anon): 1521348 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732080 kB' 'Mapped: 103012 kB' 'AnonPages: 178328 kB' 'Shmem: 1346084 kB' 'KernelStack: 14120 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 511464 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 351928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read/compare/continue cycle walks node0's keys from MemTotal through HugePages_Free; none of them match HugePages_Surp]
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
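Node0's surplus comes back 0, so its contribution stays at the 512 pages requested. The arithmetic this pass is verifying is simply that the per-node counts add back up to the global total; restated stand-alone with this run's values:

    # Accounting identity checked at hugepages.sh@107/@110, values from this run.
    nr_hugepages=1536 surp=0 resv=0
    nodes_test=([0]=512 [1]=1024) # node0/node1 split requested by custom_alloc
    if ((nodes_test[0] + nodes_test[1] == nr_hugepages + surp + resv)); then
        echo "custom alloc adds up: 512 + 1024 == 1536"
    fi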
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 46856388 kB' 'MemUsed: 13823416 kB' 'SwapCached: 0 kB' 'Active: 5571808 kB' 'Inactive: 4337360 kB' 'Active(anon): 5419740 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337360 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9540924 kB' 'Mapped: 115984 kB' 'AnonPages: 368316 kB' 'Shmem: 5051496 kB' 'KernelStack: 13192 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 231380 kB' 'Slab: 645620 kB' 'SReclaimable: 231380 kB' 'SUnreclaim: 414240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:32.701 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read/compare/continue cycle walks node1's keys from MemTotal through HugePages_Free; none of them match HugePages_Surp]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc 
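The sorted_t / sorted_s assignments above rely on a standard bash idiom: `sorted_t[nodes_test[node]]=1` uses the per-node hugepage count itself as the array index, so duplicate counts collapse onto one slot and `${!sorted_t[@]}` later expands to the distinct counts in ascending order. A minimal, self-contained sketch of the idiom (variable names and the example counts are illustrative, not the script's own):

#!/usr/bin/env bash
# Indices of an indexed array act as a de-duplicating, auto-sorted set.
declare -a seen=()
counts=(512 1024 512)            # hypothetical per-node hugepage counts
for c in "${counts[@]}"; do
  seen[c]=1                      # equal counts land on the same index
done
echo "distinct counts: ${!seen[*]}"   # -> distinct counts: 512 1024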
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:32.702 node0=512 expecting 512
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:32.702 node1=1024 expecting 1024
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:32.702
00:03:32.702 real	0m3.939s
00:03:32.702 user	0m1.543s
00:03:32.702 sys	0m2.457s
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:32.702 12:07:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:32.702 ************************************
00:03:32.702 END TEST custom_alloc
00:03:32.702 ************************************
00:03:32.702 12:07:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:32.702 12:07:38 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:32.702 12:07:38 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:32.702 12:07:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:32.702 ************************************
00:03:32.702 START TEST no_shrink_alloc
00:03:32.702 ************************************
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
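Worth noting as the no_shrink_alloc setup scrolls past: `get_test_nr_hugepages 2097152 0` turns a requested size into a page count, and the traced result `nr_hugepages=1024` is consistent with this node's 2048 kB default hugepage size, since 2097152 / 2048 = 1024. A one-liner reproducing that arithmetic (assuming, as the trace suggests, that both quantities are in the same unit):

#!/usr/bin/env bash
# Reproduce the size -> page-count conversion seen in the trace.
size=2097152                                                          # requested size, same unit as Hugepagesize
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this node
echo "nr_hugepages=$(( size / default_hugepages ))"                   # -> nr_hugepages=1024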
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.702 12:07:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:36.914 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:36.914 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108502752 kB' 'MemAvailable: 112749304 kB' 'Buffers: 3736 kB' 'Cached: 11269400 kB' 'SwapCached: 0 kB' 'Active: 7338932 kB' 'Inactive: 4480064 kB' 'Active(anon): 6943572 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548792 kB' 'Mapped: 219084 kB' 'Shmem: 6397712 kB' 'KReclaimable: 390916 kB' 'Slab: 1156256 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 765340 kB' 'KernelStack: 27248 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8377728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
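The `mapfile` / `IFS=': '` / `read -r var val _` lines above are the entirety of get_meminfo's parser: each snapshot line splits into a key, a value, and a discarded unit column, and the loop `continue`s past every key until it reaches the one requested, then echoes the value. A stand-alone rendition of the same pattern (the function name meminfo_get is illustrative, not SPDK's):

#!/usr/bin/env bash
# Fetch a single /proc/meminfo field the way the traced loop does.
meminfo_get() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
    echo "${val:-0}"                   # the unit column (kB) lands in _ and is dropped
    return 0
  done < /proc/meminfo
  return 1                             # key not present
}
meminfo_get AnonHugePages   # prints 0 on this node, matching the trace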
00:03:36.914 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped via 'continue']
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.915 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108505052 kB' 'MemAvailable: 112751604 kB' 'Buffers: 3736 kB' 'Cached: 11269404 kB' 'SwapCached: 0 kB' 'Active: 7338008 kB' 'Inactive: 4480064 kB' 'Active(anon): 6942648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548420 kB' 'Mapped: 219072 kB' 'Shmem: 6397716 kB' 'KReclaimable: 390916 kB' 'Slab: 1156212 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 765296 kB' 'KernelStack: 27232 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8377744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
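Before the HugePages_Surp lookup below, the snapshot just printed is worth a sanity check: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb is 2097152 kB, which is exactly 1024 pages of 2048 kB each, so the pool the test allocated is intact and unused. The cross-check as arithmetic (values copied from the snapshot above):

#!/usr/bin/env bash
# Cross-check the printed snapshot: pool size must equal pages * page size.
total=1024 pagesz_kb=2048 hugetlb_kb=2097152
(( total * pagesz_kb == hugetlb_kb )) && echo "consistent: 1024 * 2048 kB = 2097152 kB"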
00:03:36.916 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key is tested against HugePages_Surp and skipped via 'continue' until the match below]
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.917 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108505052 kB' 'MemAvailable: 112751604 kB' 'Buffers: 3736 kB' 'Cached: 11269404 kB' 'SwapCached: 0 kB' 'Active: 7338008 kB' 'Inactive: 4480064 kB' 'Active(anon): 6942648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548420 kB' 'Mapped: 219072 kB' 'Shmem: 6397716 kB' 'KReclaimable: 390916 kB' 'Slab: 1156212 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 765296 kB' 'KernelStack: 27232 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8377768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
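Every get_meminfo call in this stretch runs with `node=` empty, which is why the `-e /sys/devices/system/node/node/meminfo` test fails and the reads fall back to /proc/meminfo; given a node id, the same helper would read the per-node file instead, whose lines carry a `Node <n>` prefix that the `mem=("${mem[@]#Node +([0-9]) }")` expansion strips (the `+([0-9])` pattern needs extglob). A sketch of that file selection and prefix strip, assuming the same sysfs layout:

#!/usr/bin/env bash
shopt -s extglob                 # required for the +([0-9]) pattern below
node=${1-}                       # empty -> system-wide, numeric -> per-node
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
  mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")  # drop the 'Node N ' prefix per-node files carry
printf '%s\n' "${mem[@]:0:3}"     # first three normalized lines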
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.918 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the same @31/@32 compare-and-continue trace repeats for each remaining /proc/meminfo key until the target matches ...] 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.920 nr_hugepages=1024 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.920 resv_hugepages=0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.920 surplus_hugepages=0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.920 anon_hugepages=0 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108504612 kB' 'MemAvailable: 112751164 kB' 'Buffers: 3736 kB' 'Cached: 11269408 kB' 'SwapCached: 0 kB' 'Active: 7338172 kB' 'Inactive: 4480064 kB' 'Active(anon): 6942812 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548580 kB' 'Mapped: 219072 kB' 'Shmem: 6397720 kB' 'KReclaimable: 390916 kB' 'Slab: 1156212 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 765296 kB' 'KernelStack: 27216 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8377788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB' 
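The hugepages.sh@99-@110 steps above reduce to one accounting identity: the kernel's total pool must equal the requested page count plus any surplus and reserved pages. With the helper sketched earlier, the check the trace performs is, in effect:

nr_hugepages=1024                      # requested by the test
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool accounting is off" >&2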
00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.920 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the same compare-and-continue trace repeats for each remaining /proc/meminfo key ...] 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.921 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59563792 kB' 'MemUsed: 6095216 kB' 'SwapCached: 0 kB' 'Active: 1764340 kB' 'Inactive: 142704 kB' 'Active(anon): 1521048 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732156 kB' 'Mapped: 103032 kB' 'AnonPages: 178068 kB' 'Shmem: 1346160 kB' 'KernelStack: 14008 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 510928 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 351392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
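Here get_nodes (hugepages.sh@27-@33) enumerates the NUMA nodes by globbing sysfs and records per-node hugepage counts (1024 on node0, 0 on node1), and the @115-@117 loop then folds the reserved and per-node surplus counts into the expected totals. A hedged sketch of that walk; the trace shows only the assigned values, so the exact sysfs leaf read below, and the seeding of nodes_test from nodes_sys, are assumptions:

shopt -s extglob nullglob
declare -a nodes_sys nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                   # "node0" -> 0
    nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    nodes_test[n]=${nodes_sys[n]}      # simplification: real script seeds this elsewhere
done
no_nodes=${#nodes_sys[@]}              # 2 on this machine
(( no_nodes > 0 ))
for n in "${!nodes_test[@]}"; do
    (( nodes_test[n] += resv ))                                 # @116, resv=0 from the earlier check
    (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") ))   # @117, node-local surplus
done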
00:03:36.922 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same compare-and-continue trace repeats for the remaining node0 meminfo keys ...] 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.923 12:07:42
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.923 node0=1024 expecting 1024 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.923 12:07:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.228 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:40.228 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.228 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.494 12:07:45 
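The scan traced above is get_meminfo walking a meminfo file one "key: value" pair at a time under set -x, skipping every key until the requested one matches and then echoing its value. A minimal sketch of that pattern, assuming a plain while-read loop (illustrative only, not the verbatim setup/common.sh source; the function name here is made up):

    # Sketch of the per-key scan seen in the xtrace above.
    # Each skipped key shows up in the trace as "[[ key == ... ]]" + "continue".
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip until the requested key
            echo "$val"                        # the numeric value; the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

For example, get_meminfo_sketch HugePages_Surp prints 0 on this node, matching the "echo 0" / "return 0" in the trace.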
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.494 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108518748 kB' 'MemAvailable: 112765300 kB' 'Buffers: 3736 kB' 'Cached: 11269560 kB' 'SwapCached: 0 kB' 'Active: 7339880 kB' 'Inactive: 4480064 kB' 'Active(anon): 6944520 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550032 kB' 'Mapped: 219628 kB' 'Shmem: 6397872 kB' 'KReclaimable: 390916 kB' 'Slab: 1154932 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 764016 kB' 'KernelStack: 27216 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8379960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237248 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[... xtrace of the per-key scan for AnonHugePages elided: MemTotal through HardwareCorrupted are each read and skipped with "continue" ...]
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
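The hugepages.sh@96 test above, "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]", is the xtrace expansion of a check on the transparent-hugepage mode: "always [madvise] never" is the contents of the standard sysfs knob, and AnonHugePages is only sampled when THP is not disabled. A hedged sketch of the same gate (the sysfs path is the standard kernel location; the variable names and the helper are illustrative, reusing the sketch above):

    # Only read AnonHugePages when THP is enabled; illustrative, not verbatim hugepages.sh.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)          # 0 kB in the snapshot above
    fi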
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.495 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.496 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108515580 kB' 'MemAvailable: 112762132 kB' 'Buffers: 3736 kB' 'Cached: 11269564 kB' 'SwapCached: 0 kB' 'Active: 7343004 kB' 'Inactive: 4480064 kB' 'Active(anon): 6947644 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553176 kB' 'Mapped: 219616 kB' 'Shmem: 6397876 kB' 'KReclaimable: 390916 kB' 'Slab: 1154932 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 764016 kB' 'KernelStack: 27216 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8383540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237232 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[... xtrace of the per-key scan for HugePages_Surp elided: MemTotal through HugePages_Rsvd are each read and skipped with "continue" ...]
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.497 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108511640 kB' 'MemAvailable: 112758192 kB' 'Buffers: 3736 kB' 'Cached: 11269580 kB' 'SwapCached: 0 kB' 'Active: 7338944 kB' 'Inactive: 4480064 kB' 'Active(anon): 6943584 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549140 kB' 'Mapped: 219476 kB' 'Shmem: 6397892 kB' 'KReclaimable: 390916 kB' 'Slab: 1154992 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 764076 kB' 'KernelStack: 27200 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8378772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237232 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
[... xtrace of the per-key scan for HugePages_Rsvd elided: MemTotal, MemFree and the following keys are each read and skipped with "continue"; the log is cut off mid-scan at the end of this excerpt ...]
setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.499 nr_hugepages=1024 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.499 resv_hugepages=0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.499 surplus_hugepages=0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.499 anon_hugepages=0 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
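The get_meminfo calls traced here all follow one pattern: pick /proc/meminfo or a per-node sysfs meminfo file, split each 'Field: value' line on ': ', and return the value of the requested field. A minimal standalone sketch of that technique (the function name and the final example call are illustrative, not SPDK's actual helper):

#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern visible in the xtrace above.
get_meminfo_sketch() {                      # illustrative name, not SPDK's get_meminfo
  local get=$1 node=$2 line var val _
  local mem_f=/proc/meminfo
  # With a node id, read that node's statistics from sysfs instead.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while read -r line; do
    line=${line#Node "$node" }              # node files prefix every line with "Node N "
    IFS=': ' read -r var val _ <<<"$line"
    if [[ $var == "$get" ]]; then           # the trace shows one such [[ ]] test per field
      echo "$val"
      return 0
    fi
  done <"$mem_f"
  return 1
}
get_meminfo_sketch HugePages_Total          # prints 1024 on the host traced in this log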
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.499 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.500 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.500 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 108511892 kB' 'MemAvailable: 112758444 kB' 'Buffers: 3736 kB' 'Cached: 11269624 kB' 'SwapCached: 0 kB' 'Active: 7338576 kB' 'Inactive: 4480064 kB' 'Active(anon): 6943216 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4480064 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548660 kB' 'Mapped: 219112 kB' 'Shmem: 6397936 kB' 'KReclaimable: 390916 kB' 'Slab: 1154992 kB' 'SReclaimable: 390916 kB' 'SUnreclaim: 764076 kB' 'KernelStack: 27184 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8378796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237216 kB' 'VmallocChunk: 0 kB' 'Percpu: 113472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3290484 kB' 'DirectMap2M: 17360896 kB' 'DirectMap1G: 115343360 kB'
00:03:40.500 [xtrace of the per-field scan elided: the same setup/common.sh@31-32 loop, this time skipping every field until HugePages_Total matches]
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
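The arithmetic guard (( 1024 == nr_hugepages + surp + resv )) and get_nodes' glob over /sys/devices/system/node/node+([0-9]) can both be re-derived from what /proc/meminfo and sysfs expose. A rough standalone equivalent, assuming the 1024-page request of this run and the 2048 kB default hugepage size the dump reports (the meminfo helper name is illustrative):

#!/usr/bin/env bash
# Re-derive the pool accounting checked above: the pool size reported by the
# kernel must equal the requested count plus surplus plus reserved pages.
shopt -s extglob nullglob
nr_requested=1024                        # the value this test run asked for
meminfo() { awk -v f="$1:" '$1 == f {print $2}' /proc/meminfo; }
total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)
(( total == nr_requested + surp + resv )) || echo "pool mismatch: $total"
# Per-node shares, mirroring get_nodes' walk over the node directories:
for node in /sys/devices/system/node/node+([0-9]); do
  echo "node${node##*node}: $(<"$node/hugepages/hugepages-2048kB/nr_hugepages") pages"
done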
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.501 12:07:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59572952 kB' 'MemUsed: 6086056 kB' 'SwapCached: 0 kB' 'Active: 1764392 kB' 'Inactive: 142704 kB' 'Active(anon): 1521100 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 142704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732180 kB' 'Mapped: 103048 kB' 'AnonPages: 178076 kB' 'Shmem: 1346184 kB' 'KernelStack: 13976 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159536 kB' 'Slab: 510488 kB' 'SReclaimable: 159536 kB' 'SUnreclaim: 350952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:40.502 [xtrace of the per-field scan elided: the node0 meminfo fields above are each skipped until HugePages_Surp matches]
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:40.503 node0=1024 expecting 1024
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:40.503 
00:03:40.503 real 0m7.780s
00:03:40.503 user 0m2.926s
00:03:40.503 sys 0m4.979s
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:40.503 12:07:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:40.503 ************************************
00:03:40.503 END TEST no_shrink_alloc
00:03:40.503 ************************************
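The clear_hp teardown traced next zeroes every per-node, per-size hugepage pool so nothing leaks into the following test. A sketch of the same idea (the nr_hugepages target of the traced 'echo 0' is inferred from the sysfs layout, and root is required just as in the test itself):

#!/usr/bin/env bash
# Reset all hugepage pools, node by node and size by size.
shopt -s nullglob
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"   # inferred target of the 'echo 0' in the trace
  done
done
export CLEAR_HUGE=yes             # the trace exports this flag right afterwards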
00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.503 12:07:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.503 00:03:40.503 real 0m28.297s 00:03:40.503 user 0m11.089s 00:03:40.503 sys 0m17.619s 00:03:40.503 12:07:46 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:40.503 12:07:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.503 ************************************ 00:03:40.503 END TEST hugepages 00:03:40.503 ************************************ 00:03:40.764 12:07:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.765 12:07:46 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:40.765 12:07:46 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:40.765 12:07:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.765 ************************************ 00:03:40.765 START TEST driver 00:03:40.765 ************************************ 00:03:40.765 12:07:46 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.765 * Looking for test storage... 
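The clear_hp loop that closes the hugepages suite above zeroes every per-node hugepage pool through sysfs and exports CLEAR_HUGE. A sketch of the same step, assuming root and the standard sysfs layout:

#!/usr/bin/env bash
# Sketch of clear_hp as traced above: reset every hugepage pool on every
# NUMA node to 0 pages. Needs root.
shopt -s nullglob   # skip nodes without hugepage directories
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes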
00:03:40.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.765 12:07:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:40.765 12:07:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.765 12:07:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.054 12:07:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:46.054 12:07:51 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.054 12:07:51 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.054 12:07:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.054 ************************************ 00:03:46.054 START TEST guess_driver 00:03:46.054 ************************************ 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:46.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver 
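guess_driver settles on vfio-pci above because the machine exposes 370 IOMMU groups and modprobe can resolve the vfio_pci dependency chain. A sketch of that decision; the non-IOMMU fallback is an assumption, since this run never reaches it:

#!/usr/bin/env bash
shopt -s nullglob   # so an empty iommu_groups dir yields a zero-length array
# Sketch of the driver pick traced above.
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback; this run never takes it
    fi
}

pick_driver   # prints "vfio-pci" on this machine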
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:46.054 Looking for driver=vfio-pci 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.054 12:07:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.272 12:07:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.631 00:03:55.631 real 0m9.038s 00:03:55.631 user 0m2.979s 00:03:55.631 sys 0m5.306s 00:03:55.631 12:08:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:55.631 12:08:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.631 ************************************ 00:03:55.631 END TEST guess_driver 00:03:55.631 ************************************ 00:03:55.631 00:03:55.631 real 0m14.291s 00:03:55.631 user 0m4.537s 00:03:55.631 sys 0m8.281s 00:03:55.631 12:08:00 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:55.631 
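The wall of "read -r _ _ _ _ marker setup_driver" records above is the test replaying the setup config output and checking that every "->" marker names vfio-pci before declaring fail == 0. A condensed sketch of that loop (the relative setup.sh path is illustrative):

#!/usr/bin/env bash
# Sketch of the verification traced above: scan "setup.sh config" output and
# fail if any "-> driver" marker names something other than the chosen driver.
verify_driver() {
    local expected=$1 fail=0 _ marker setup_driver
    while read -r _ _ _ _ marker setup_driver; do
        if [[ $marker == '->' && $setup_driver != "$expected" ]]; then
            fail=1
        fi
    done < <(scripts/setup.sh config)   # illustrative path
    ((fail == 0))
}

verify_driver vfio-pci && echo "all devices bound to vfio-pci"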
12:08:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.631 ************************************ 00:03:55.631 END TEST driver 00:03:55.631 ************************************ 00:03:55.631 12:08:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.631 12:08:00 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:55.631 12:08:00 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:55.631 12:08:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.631 ************************************ 00:03:55.631 START TEST devices 00:03:55.631 ************************************ 00:03:55.631 12:08:00 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.632 * Looking for test storage... 00:03:55.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.632 12:08:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:55.632 12:08:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:55.632 12:08:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.632 12:08:00 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.838 12:08:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:59.839 12:08:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:59.839 No valid GPT data, 
bailing 00:03:59.839 12:08:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:59.839 12:08:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:59.839 12:08:04 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:59.839 12:08:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.839 ************************************ 00:03:59.839 START TEST nvme_mount 00:03:59.839 ************************************ 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.839 12:08:04 
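Before mounting anything, the devices suite above keeps only non-zoned NVMe disks that carry no usable GPT (spdk-gpt.py bails, blkid reports no PTTYPE) and meet min_disk_size; the 1920383410176 it echoes is the disk's sector count times 512. A sketch of those two checks, with hypothetical helper names (only blkid and the sysfs size file are real interfaces):

#!/usr/bin/env bash
# Sketch of the eligibility checks traced above.
block_in_use() {
    local block=$1
    # No partition-table type reported means the disk is considered free.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$block") ]]
}

sec_size_to_bytes() {
    local dev=$1
    echo $(( $(< "/sys/block/$dev/size") * 512 ))   # 512-byte sectors
}

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
if ! block_in_use nvme0n1 && (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )); then
    echo "nvme0n1 is usable as the test disk"
fi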
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.839 12:08:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:00.099 Creating new GPT entries in memory. 00:04:00.099 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.099 other utilities. 00:04:00.099 12:08:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.099 12:08:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.099 12:08:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.100 12:08:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.100 12:08:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.483 Creating new GPT entries in memory. 00:04:01.483 The operation has completed successfully. 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 399604 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
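nvme_mount's happy path above is: zap the disk, carve one 1 GiB partition under an flock on the device, wait for the partition uevent, then ext4-format and mount it. A destructive sketch of the same sequence (device path matches this job; the mountpoint and the partprobe stand-in are illustrative):

#!/usr/bin/env bash
# Sketch of the partition-and-mount flow traced above. DESTRUCTIVE:
# it wipes the target disk.
set -e
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount   # stand-in for the spdk/test/setup mountpoint

sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB
partprobe "$disk"   # stand-in for the sync_dev_uevents.sh wait in the trace

mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"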
00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.483 12:08:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.783 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.044 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.044 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.304 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:05.304 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:05.304 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.304 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:05.304 12:08:10 
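cleanup_nvme above undoes that: drop the test file, unmount, then wipefs the partition's ext4 signature (the "53 ef" bytes) and the disk's GPT headers ("45 46 49 20 50 41 52 54" is the hex for "EFI PART"). A sketch of that teardown:

#!/usr/bin/env bash
# Sketch of cleanup_nvme as traced above: unmount, then strip the ext4
# signature from the partition and the GPT headers from the whole disk.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount   # illustrative

mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
[[ -b $disk ]] && wipefs --all "$disk"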
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.304 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.305 12:08:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.506 12:08:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.805 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.805 00:04:12.805 real 0m13.754s 00:04:12.805 user 0m4.291s 00:04:12.805 sys 0m7.363s 00:04:12.805 12:08:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.066 12:08:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.066 ************************************ 00:04:13.066 END TEST nvme_mount 00:04:13.066 ************************************ 00:04:13.066 
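Every verify pass in this suite follows the pattern traced repeatedly above: pin PCI_ALLOWED to the test controller, replay the setup config output, skip every other BDF, and require one "Active devices: ..." status line naming the expected mount. A condensed sketch of that loop (the relative setup.sh path is illustrative):

#!/usr/bin/env bash
# Sketch of the verify loop used throughout the traces above.
verify_mounts() {
    local dev=$1 mounts=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci != "$dev" ]] && continue
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" scripts/setup.sh config)
    ((found == 1))
}

verify_mounts 0000:65:00.0 nvme0n1:nvme0n1p1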
12:08:18 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.066 12:08:18 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.066 12:08:18 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.066 12:08:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.066 ************************************ 00:04:13.066 START TEST dm_mount 00:04:13.066 ************************************ 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.066 12:08:18 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:14.007 Creating new GPT entries in memory. 00:04:14.007 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.007 other utilities. 00:04:14.007 12:08:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.007 12:08:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.007 12:08:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.007 12:08:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.007 12:08:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.949 Creating new GPT entries in memory. 00:04:14.949 The operation has completed successfully. 
00:04:14.949 12:08:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.949 12:08:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.949 12:08:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.949 12:08:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.949 12:08:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:16.336 The operation has completed successfully. 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 405156 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
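dm_mount stitches the two fresh 1 GiB partitions into one mapper device, waits for /dev/mapper/nvme_dm_test to appear, and resolves it to its dm-N node with readlink. A sketch with an explicit linear table; the table itself is an assumption, since the trace does not show the one the test used:

#!/usr/bin/env bash
# Sketch of the device-mapper step traced above: concatenate the two
# 2097152-sector partitions into a single linear target.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF

dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
mkfs.ext4 -qF /dev/mapper/nvme_dm_test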
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.336 12:08:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.672 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.933 
12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.933 12:08:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.142 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:24.143 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.143 00:04:24.143 real 0m10.770s 00:04:24.143 user 0m2.916s 00:04:24.143 sys 0m4.924s 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.143 12:08:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.143 ************************************ 00:04:24.143 END TEST dm_mount 00:04:24.143 ************************************ 00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
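For context, the two verify passes traced above amount to scanning the per-BDF status lines that setup.sh emits and pattern-matching the status column of the one allowed device. A condensed, hypothetical sketch of that loop (the `pci`/`status`/`found` names mirror the trace; the rest is illustrative, not the script verbatim):

    found=0
    while read -r pci _ _ status; do
        # lines look like: 0000:65:00.0 (144d a80a): Active devices: holder@nvme0n1p1:dm-0,..., so not binding PCI dev
        [[ $pci == 0000:65:00.0 ]] || continue
        [[ $status == *"Active devices: "*"holder@nvme0n1p1:dm-0"* ]] && found=1
    done < <(sudo PCI_ALLOWED=0000:65:00.0 ./scripts/setup.sh config)
    (( found == 1 ))   # the test fails unless the expected holders/mounts showed up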
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 
00:04:24.143 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:24.143 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 
00:04:24.143 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 
00:04:24.143 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 
00:04:24.143 12:08:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 
00:04:24.143 
00:04:24.143 real	0m29.062s 
00:04:24.143 user	0m8.792s 
00:04:24.143 sys	0m15.072s 
00:04:24.143 12:08:29 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:04:24.143 12:08:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 
00:04:24.143 ************************************ 
00:04:24.143 END TEST devices 
00:04:24.143 ************************************ 
00:04:24.143 
00:04:24.143 real	1m38.827s 
00:04:24.143 user	0m33.530s 
00:04:24.143 sys	0m56.948s 
00:04:24.143 12:08:29 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:04:24.143 12:08:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 
00:04:24.143 ************************************ 
00:04:24.143 END TEST setup.sh 
00:04:24.143 ************************************ 
00:04:24.143 12:08:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 
00:04:28.352 Hugepages 
00:04:28.352 node     hugesize     free /  total 
00:04:28.352 node0   1048576kB        0 /      0 
00:04:28.352 node0      2048kB     2048 /   2048 
00:04:28.352 node1   1048576kB        0 /      0 
00:04:28.352 node1      2048kB        0 /      0 
00:04:28.352 
00:04:28.352 Type     BDF             Vendor Device NUMA Driver  Device Block devices 
00:04:28.352 I/OAT    0000:00:01.0    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.1    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.2    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.3    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.4    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.5    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.6    8086   0b00   0    ioatdma -      - 
00:04:28.352 I/OAT    0000:00:01.7    8086   0b00   0    ioatdma -      - 
00:04:28.352 NVMe     0000:65:00.0    144d   a80a   0    nvme    nvme0  nvme0n1 
00:04:28.352 I/OAT    0000:80:01.0    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.1    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.2    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.3    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.4    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.5    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.6    8086   0b00   1    ioatdma -      - 
00:04:28.352 I/OAT    0000:80:01.7    8086   0b00   1    ioatdma -      - 
00:04:28.352 12:08:33 -- spdk/autotest.sh@130 -- # uname -s 
00:04:28.352 12:08:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:28.352 12:08:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:28.352 12:08:33 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.562 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:32.562 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:32.563 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:33.946 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:33.946 12:08:39 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:34.888 12:08:40 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:34.888 12:08:40 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:34.888 12:08:40 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.888 12:08:40 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:34.888 12:08:40 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:34.888 12:08:40 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:34.888 12:08:40 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.888 12:08:40 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.888 12:08:40 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:34.888 12:08:40 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:34.888 12:08:40 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:34.888 12:08:40 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.090 Waiting for block devices as requested 00:04:39.090 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:39.090 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:39.350 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:39.350 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:39.350 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:39.611 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:39.611 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:39.611 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:39.871 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:39.871 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:39.871 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:39.871 12:08:45 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:04:39.871 12:08:45 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:39.871 12:08:45 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.871 12:08:45 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:04:39.871 12:08:45 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:39.871 12:08:45 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:39.871 12:08:45 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:39.871 12:08:45 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:39.871 12:08:45 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:39.872 12:08:45 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:39.872 12:08:45 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:39.872 12:08:45 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:39.872 12:08:45 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:39.872 12:08:45 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:04:39.872 12:08:45 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:39.872 12:08:45 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:39.872 12:08:45 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:39.872 12:08:45 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:39.872 12:08:45 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:39.872 12:08:45 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:39.872 12:08:45 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:39.872 12:08:45 -- common/autotest_common.sh@1556 -- # continue 00:04:39.872 12:08:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.872 12:08:45 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:39.872 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:39.872 12:08:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.872 12:08:45 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:39.872 12:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:39.872 12:08:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.140 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.140 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:44.140 12:08:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:44.140 12:08:49 -- common/autotest_common.sh@729 -- # xtrace_disable 
00:04:44.140 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.140 12:08:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:44.140 12:08:49 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:44.140 12:08:49 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:44.141 12:08:49 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:44.141 12:08:49 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:44.141 12:08:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:44.141 12:08:49 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:44.141 12:08:49 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:44.141 12:08:49 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.141 12:08:49 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.141 12:08:49 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:44.141 12:08:49 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:44.141 12:08:49 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:44.141 12:08:49 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:44.141 12:08:49 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:44.141 12:08:49 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:04:44.141 12:08:49 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:44.141 12:08:49 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:44.141 12:08:49 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:44.141 12:08:49 -- common/autotest_common.sh@1592 -- # return 0 00:04:44.141 12:08:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:44.141 12:08:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:44.141 12:08:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.141 12:08:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.141 12:08:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:44.141 12:08:49 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:44.141 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.141 12:08:49 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:44.141 12:08:49 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.141 12:08:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.141 12:08:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.141 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.141 ************************************ 00:04:44.141 START TEST env 00:04:44.141 ************************************ 00:04:44.141 12:08:49 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.141 * Looking for test storage... 
00:04:44.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:44.141 12:08:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.141 12:08:49 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.141 12:08:49 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:44.141 12:08:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.141 ************************************ 00:04:44.141 START TEST env_memory 00:04:44.141 ************************************ 00:04:44.141 12:08:49 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.141 00:04:44.141 00:04:44.141 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.141 http://cunit.sourceforge.net/ 00:04:44.141 00:04:44.141 00:04:44.141 Suite: memory 00:04:44.401 Test: alloc and free memory map ...[2024-06-10 12:08:49.776925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.401 passed 00:04:44.401 Test: mem map translation ...[2024-06-10 12:08:49.802191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.401 [2024-06-10 12:08:49.802216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.401 [2024-06-10 12:08:49.802262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.401 [2024-06-10 12:08:49.802268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.401 passed 00:04:44.401 Test: mem map registration ...[2024-06-10 12:08:49.857358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:44.401 [2024-06-10 12:08:49.857374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:44.401 passed 00:04:44.401 Test: mem map adjacent registrations ...passed 00:04:44.401 00:04:44.401 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.401 suites 1 1 n/a 0 0 00:04:44.401 tests 4 4 4 0 0 00:04:44.401 asserts 152 152 152 0 n/a 00:04:44.401 00:04:44.401 Elapsed time = 0.192 seconds 00:04:44.401 00:04:44.401 real 0m0.205s 00:04:44.401 user 0m0.196s 00:04:44.401 sys 0m0.008s 00:04:44.401 12:08:49 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:44.401 12:08:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.401 ************************************ 00:04:44.401 END TEST env_memory 00:04:44.401 ************************************ 00:04:44.401 12:08:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.401 12:08:49 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:44.401 12:08:49 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
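The *ERROR* lines in env_memory above are expected: the test deliberately feeds spdk_mem_map_set_translation unaligned parameters (1234 is not a multiple of the 2 MiB translation granularity) and an address one past the 48-bit user virtual-address range. The rejected values check out with plain bash arithmetic:

    echo $(( 2 * 1024 * 1024 ))   # 2097152 - the 2 MiB translation granularity
    echo $(( 1 << 48 ))           # 281474976710656 - first address above user space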
00:04:44.401 12:08:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.662 ************************************ 00:04:44.662 START TEST env_vtophys 00:04:44.662 ************************************ 00:04:44.662 12:08:50 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.662 EAL: lib.eal log level changed from notice to debug 00:04:44.662 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.662 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.662 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.662 EAL: Detected lcore 3 as core 3 on socket 0 00:04:44.662 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.662 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.662 EAL: Detected lcore 6 as core 6 on socket 0 00:04:44.662 EAL: Detected lcore 7 as core 7 on socket 0 00:04:44.662 EAL: Detected lcore 8 as core 8 on socket 0 00:04:44.662 EAL: Detected lcore 9 as core 9 on socket 0 00:04:44.662 EAL: Detected lcore 10 as core 10 on socket 0 00:04:44.662 EAL: Detected lcore 11 as core 11 on socket 0 00:04:44.662 EAL: Detected lcore 12 as core 12 on socket 0 00:04:44.662 EAL: Detected lcore 13 as core 13 on socket 0 00:04:44.662 EAL: Detected lcore 14 as core 14 on socket 0 00:04:44.662 EAL: Detected lcore 15 as core 15 on socket 0 00:04:44.662 EAL: Detected lcore 16 as core 16 on socket 0 00:04:44.662 EAL: Detected lcore 17 as core 17 on socket 0 00:04:44.662 EAL: Detected lcore 18 as core 18 on socket 0 00:04:44.662 EAL: Detected lcore 19 as core 19 on socket 0 00:04:44.662 EAL: Detected lcore 20 as core 20 on socket 0 00:04:44.662 EAL: Detected lcore 21 as core 21 on socket 0 00:04:44.662 EAL: Detected lcore 22 as core 22 on socket 0 00:04:44.662 EAL: Detected lcore 23 as core 23 on socket 0 00:04:44.662 EAL: Detected lcore 24 as core 24 on socket 0 00:04:44.662 EAL: Detected lcore 25 as core 25 on socket 0 00:04:44.662 EAL: Detected lcore 26 as core 26 on socket 0 00:04:44.662 EAL: Detected lcore 27 as core 27 on socket 0 00:04:44.662 EAL: Detected lcore 28 as core 28 on socket 0 00:04:44.662 EAL: Detected lcore 29 as core 29 on socket 0 00:04:44.662 EAL: Detected lcore 30 as core 30 on socket 0 00:04:44.662 EAL: Detected lcore 31 as core 31 on socket 0 00:04:44.662 EAL: Detected lcore 32 as core 32 on socket 0 00:04:44.662 EAL: Detected lcore 33 as core 33 on socket 0 00:04:44.662 EAL: Detected lcore 34 as core 34 on socket 0 00:04:44.662 EAL: Detected lcore 35 as core 35 on socket 0 00:04:44.662 EAL: Detected lcore 36 as core 0 on socket 1 00:04:44.662 EAL: Detected lcore 37 as core 1 on socket 1 00:04:44.662 EAL: Detected lcore 38 as core 2 on socket 1 00:04:44.662 EAL: Detected lcore 39 as core 3 on socket 1 00:04:44.662 EAL: Detected lcore 40 as core 4 on socket 1 00:04:44.662 EAL: Detected lcore 41 as core 5 on socket 1 00:04:44.662 EAL: Detected lcore 42 as core 6 on socket 1 00:04:44.662 EAL: Detected lcore 43 as core 7 on socket 1 00:04:44.662 EAL: Detected lcore 44 as core 8 on socket 1 00:04:44.662 EAL: Detected lcore 45 as core 9 on socket 1 00:04:44.662 EAL: Detected lcore 46 as core 10 on socket 1 00:04:44.662 EAL: Detected lcore 47 as core 11 on socket 1 00:04:44.662 EAL: Detected lcore 48 as core 12 on socket 1 00:04:44.662 EAL: Detected lcore 49 as core 13 on socket 1 00:04:44.662 EAL: Detected lcore 50 as core 14 on socket 1 00:04:44.662 EAL: Detected lcore 51 as core 15 on socket 1 00:04:44.662 EAL: Detected lcore 52 as core 16 on socket 1 00:04:44.662 EAL: Detected lcore 
53 as core 17 on socket 1 00:04:44.662 EAL: Detected lcore 54 as core 18 on socket 1 00:04:44.662 EAL: Detected lcore 55 as core 19 on socket 1 00:04:44.662 EAL: Detected lcore 56 as core 20 on socket 1 00:04:44.662 EAL: Detected lcore 57 as core 21 on socket 1 00:04:44.662 EAL: Detected lcore 58 as core 22 on socket 1 00:04:44.662 EAL: Detected lcore 59 as core 23 on socket 1 00:04:44.662 EAL: Detected lcore 60 as core 24 on socket 1 00:04:44.662 EAL: Detected lcore 61 as core 25 on socket 1 00:04:44.662 EAL: Detected lcore 62 as core 26 on socket 1 00:04:44.662 EAL: Detected lcore 63 as core 27 on socket 1 00:04:44.662 EAL: Detected lcore 64 as core 28 on socket 1 00:04:44.662 EAL: Detected lcore 65 as core 29 on socket 1 00:04:44.662 EAL: Detected lcore 66 as core 30 on socket 1 00:04:44.662 EAL: Detected lcore 67 as core 31 on socket 1 00:04:44.662 EAL: Detected lcore 68 as core 32 on socket 1 00:04:44.662 EAL: Detected lcore 69 as core 33 on socket 1 00:04:44.662 EAL: Detected lcore 70 as core 34 on socket 1 00:04:44.662 EAL: Detected lcore 71 as core 35 on socket 1 00:04:44.662 EAL: Detected lcore 72 as core 0 on socket 0 00:04:44.662 EAL: Detected lcore 73 as core 1 on socket 0 00:04:44.662 EAL: Detected lcore 74 as core 2 on socket 0 00:04:44.662 EAL: Detected lcore 75 as core 3 on socket 0 00:04:44.662 EAL: Detected lcore 76 as core 4 on socket 0 00:04:44.662 EAL: Detected lcore 77 as core 5 on socket 0 00:04:44.662 EAL: Detected lcore 78 as core 6 on socket 0 00:04:44.662 EAL: Detected lcore 79 as core 7 on socket 0 00:04:44.662 EAL: Detected lcore 80 as core 8 on socket 0 00:04:44.662 EAL: Detected lcore 81 as core 9 on socket 0 00:04:44.662 EAL: Detected lcore 82 as core 10 on socket 0 00:04:44.662 EAL: Detected lcore 83 as core 11 on socket 0 00:04:44.662 EAL: Detected lcore 84 as core 12 on socket 0 00:04:44.662 EAL: Detected lcore 85 as core 13 on socket 0 00:04:44.662 EAL: Detected lcore 86 as core 14 on socket 0 00:04:44.662 EAL: Detected lcore 87 as core 15 on socket 0 00:04:44.662 EAL: Detected lcore 88 as core 16 on socket 0 00:04:44.662 EAL: Detected lcore 89 as core 17 on socket 0 00:04:44.662 EAL: Detected lcore 90 as core 18 on socket 0 00:04:44.662 EAL: Detected lcore 91 as core 19 on socket 0 00:04:44.662 EAL: Detected lcore 92 as core 20 on socket 0 00:04:44.662 EAL: Detected lcore 93 as core 21 on socket 0 00:04:44.662 EAL: Detected lcore 94 as core 22 on socket 0 00:04:44.663 EAL: Detected lcore 95 as core 23 on socket 0 00:04:44.663 EAL: Detected lcore 96 as core 24 on socket 0 00:04:44.663 EAL: Detected lcore 97 as core 25 on socket 0 00:04:44.663 EAL: Detected lcore 98 as core 26 on socket 0 00:04:44.663 EAL: Detected lcore 99 as core 27 on socket 0 00:04:44.663 EAL: Detected lcore 100 as core 28 on socket 0 00:04:44.663 EAL: Detected lcore 101 as core 29 on socket 0 00:04:44.663 EAL: Detected lcore 102 as core 30 on socket 0 00:04:44.663 EAL: Detected lcore 103 as core 31 on socket 0 00:04:44.663 EAL: Detected lcore 104 as core 32 on socket 0 00:04:44.663 EAL: Detected lcore 105 as core 33 on socket 0 00:04:44.663 EAL: Detected lcore 106 as core 34 on socket 0 00:04:44.663 EAL: Detected lcore 107 as core 35 on socket 0 00:04:44.663 EAL: Detected lcore 108 as core 0 on socket 1 00:04:44.663 EAL: Detected lcore 109 as core 1 on socket 1 00:04:44.663 EAL: Detected lcore 110 as core 2 on socket 1 00:04:44.663 EAL: Detected lcore 111 as core 3 on socket 1 00:04:44.663 EAL: Detected lcore 112 as core 4 on socket 1 00:04:44.663 EAL: Detected lcore 113 as core 5 on 
socket 1 00:04:44.663 EAL: Detected lcore 114 as core 6 on socket 1 00:04:44.663 EAL: Detected lcore 115 as core 7 on socket 1 00:04:44.663 EAL: Detected lcore 116 as core 8 on socket 1 00:04:44.663 EAL: Detected lcore 117 as core 9 on socket 1 00:04:44.663 EAL: Detected lcore 118 as core 10 on socket 1 00:04:44.663 EAL: Detected lcore 119 as core 11 on socket 1 00:04:44.663 EAL: Detected lcore 120 as core 12 on socket 1 00:04:44.663 EAL: Detected lcore 121 as core 13 on socket 1 00:04:44.663 EAL: Detected lcore 122 as core 14 on socket 1 00:04:44.663 EAL: Detected lcore 123 as core 15 on socket 1 00:04:44.663 EAL: Detected lcore 124 as core 16 on socket 1 00:04:44.663 EAL: Detected lcore 125 as core 17 on socket 1 00:04:44.663 EAL: Detected lcore 126 as core 18 on socket 1 00:04:44.663 EAL: Detected lcore 127 as core 19 on socket 1 00:04:44.663 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:44.663 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:44.663 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:44.663 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:44.663 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:44.663 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:44.663 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:44.663 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:44.663 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:44.663 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:44.663 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:44.663 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:44.663 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:44.663 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:44.663 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:44.663 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:44.663 EAL: Maximum logical cores by configuration: 128 00:04:44.663 EAL: Detected CPU lcores: 128 00:04:44.663 EAL: Detected NUMA nodes: 2 00:04:44.663 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.663 EAL: Detected shared linkage of DPDK 00:04:44.663 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.663 EAL: Bus pci wants IOVA as 'DC' 00:04:44.663 EAL: Buses did not request a specific IOVA mode. 00:04:44.663 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.663 EAL: Selected IOVA mode 'VA' 00:04:44.663 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.663 EAL: Probing VFIO support... 00:04:44.663 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.663 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.663 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.663 EAL: VFIO support initialized 00:04:44.663 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.663 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.663 EAL: Setting up physically contiguous memory... 
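Just below, the EAL creates four segment lists per socket (n_segs:8192, hugepage_sz:2097152) and reserves virtual address space for each; that is why every "VA reserved for memseg list" that follows is 0x400000000 bytes, since a list covers n_segs x hugepage_sz of address space. Quick check:

    printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000 (16 GiB) per memseg list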
00:04:44.663 EAL: Setting maximum number of open files to 524288 00:04:44.663 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.663 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.663 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.663 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.663 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.663 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.663 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.663 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:44.663 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.663 EAL: Hugepages will be freed exactly as allocated. 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: TSC frequency is ~2400000 KHz 00:04:44.663 EAL: Main lcore 0 is ready (tid=7f087f8aca00;cpuset=[0]) 00:04:44.663 EAL: Trying to obtain current memory policy. 00:04:44.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.663 EAL: Restoring previous memory policy: 0 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.663 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.663 00:04:44.663 00:04:44.663 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.663 http://cunit.sourceforge.net/ 00:04:44.663 00:04:44.663 00:04:44.663 Suite: components_suite 00:04:44.663 Test: vtophys_malloc_test ...passed 00:04:44.663 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.663 EAL: Restoring previous memory policy: 4 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.663 EAL: Trying to obtain current memory policy. 00:04:44.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.663 EAL: Restoring previous memory policy: 4 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.663 EAL: Trying to obtain current memory policy. 00:04:44.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.663 EAL: Restoring previous memory policy: 4 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.663 EAL: request: mp_malloc_sync 00:04:44.663 EAL: No shared files mode enabled, IPC is disabled 00:04:44.663 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.664 EAL: Trying to obtain current memory policy. 
00:04:44.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.664 EAL: Restoring previous memory policy: 4 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.664 EAL: Trying to obtain current memory policy. 00:04:44.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.664 EAL: Restoring previous memory policy: 4 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.664 EAL: Trying to obtain current memory policy. 00:04:44.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.664 EAL: Restoring previous memory policy: 4 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.664 EAL: Trying to obtain current memory policy. 00:04:44.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.664 EAL: Restoring previous memory policy: 4 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.664 EAL: Trying to obtain current memory policy. 00:04:44.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.664 EAL: Restoring previous memory policy: 4 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.664 EAL: request: mp_malloc_sync 00:04:44.664 EAL: No shared files mode enabled, IPC is disabled 00:04:44.664 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.924 EAL: request: mp_malloc_sync 00:04:44.924 EAL: No shared files mode enabled, IPC is disabled 00:04:44.924 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.924 EAL: Trying to obtain current memory policy. 
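The expansion sizes in this vtophys_spdk_malloc_test run march through 4, 6, 10, 18, 34, ... up to 1026 MB, i.e. 2^k + 2 MB, which looks designed to straddle a growing number of 2 MB hugepages on each iteration. The sequence reproduces with:

    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done   # 4MB 6MB 10MB ... 1026MB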
00:04:44.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.924 EAL: Restoring previous memory policy: 4 00:04:44.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.924 EAL: request: mp_malloc_sync 00:04:44.924 EAL: No shared files mode enabled, IPC is disabled 00:04:44.924 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.924 EAL: request: mp_malloc_sync 00:04:44.924 EAL: No shared files mode enabled, IPC is disabled 00:04:44.924 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.924 EAL: Trying to obtain current memory policy. 00:04:44.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.184 EAL: Restoring previous memory policy: 4 00:04:45.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.184 EAL: request: mp_malloc_sync 00:04:45.184 EAL: No shared files mode enabled, IPC is disabled 00:04:45.184 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.445 EAL: request: mp_malloc_sync 00:04:45.445 EAL: No shared files mode enabled, IPC is disabled 00:04:45.445 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.445 passed 00:04:45.445 00:04:45.445 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.445 suites 1 1 n/a 0 0 00:04:45.445 tests 2 2 2 0 0 00:04:45.445 asserts 497 497 497 0 n/a 00:04:45.445 00:04:45.445 Elapsed time = 0.658 seconds 00:04:45.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.446 EAL: request: mp_malloc_sync 00:04:45.446 EAL: No shared files mode enabled, IPC is disabled 00:04:45.446 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.446 EAL: No shared files mode enabled, IPC is disabled 00:04:45.446 EAL: No shared files mode enabled, IPC is disabled 00:04:45.446 EAL: No shared files mode enabled, IPC is disabled 00:04:45.446 00:04:45.446 real 0m0.783s 00:04:45.446 user 0m0.416s 00:04:45.446 sys 0m0.345s 00:04:45.446 12:08:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.446 12:08:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 ************************************ 00:04:45.446 END TEST env_vtophys 00:04:45.446 ************************************ 00:04:45.446 12:08:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.446 12:08:50 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:45.446 12:08:50 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:45.446 12:08:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 ************************************ 00:04:45.446 START TEST env_pci 00:04:45.446 ************************************ 00:04:45.446 12:08:50 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.446 00:04:45.446 00:04:45.446 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.446 http://cunit.sourceforge.net/ 00:04:45.446 00:04:45.446 00:04:45.446 Suite: pci 00:04:45.446 Test: pci_hook ...[2024-06-10 12:08:50.889587] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 417274 has claimed it 00:04:45.446 EAL: Cannot find device (10000:00:01.0) 00:04:45.446 EAL: Failed to attach device on primary process 00:04:45.446 passed 00:04:45.446 00:04:45.446 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:45.446 suites 1 1 n/a 0 0 00:04:45.446 tests 1 1 1 0 0 00:04:45.446 asserts 25 25 25 0 n/a 00:04:45.446 00:04:45.446 Elapsed time = 0.032 seconds 00:04:45.446 00:04:45.446 real 0m0.053s 00:04:45.446 user 0m0.018s 00:04:45.446 sys 0m0.035s 00:04:45.446 12:08:50 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.446 12:08:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 ************************************ 00:04:45.446 END TEST env_pci 00:04:45.446 ************************************ 00:04:45.446 12:08:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.446 12:08:50 env -- env/env.sh@15 -- # uname 00:04:45.446 12:08:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.446 12:08:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.446 12:08:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.446 12:08:50 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:45.446 12:08:50 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:45.446 12:08:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 ************************************ 00:04:45.446 START TEST env_dpdk_post_init 00:04:45.446 ************************************ 00:04:45.446 12:08:51 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.446 EAL: Detected CPU lcores: 128 00:04:45.446 EAL: Detected NUMA nodes: 2 00:04:45.446 EAL: Detected shared linkage of DPDK 00:04:45.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.707 EAL: Selected IOVA mode 'VA' 00:04:45.707 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.707 EAL: VFIO support initialized 00:04:45.707 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.707 EAL: Using IOMMU type 1 (Type 1) 00:04:45.707 EAL: Ignore mapping IO port bar(1) 00:04:45.968 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:45.968 EAL: Ignore mapping IO port bar(1) 00:04:46.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:46.229 EAL: Ignore mapping IO port bar(1) 00:04:46.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:46.490 EAL: Ignore mapping IO port bar(1) 00:04:46.490 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:46.750 EAL: Ignore mapping IO port bar(1) 00:04:46.750 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:47.010 EAL: Ignore mapping IO port bar(1) 00:04:47.010 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:47.010 EAL: Ignore mapping IO port bar(1) 00:04:47.272 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:47.272 EAL: Ignore mapping IO port bar(1) 00:04:47.532 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:47.794 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:47.794 EAL: Ignore mapping IO port bar(1) 00:04:47.794 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:48.055 EAL: Ignore mapping IO port bar(1) 00:04:48.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:04:48.316 EAL: Ignore mapping IO port bar(1) 00:04:48.316 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:48.577 EAL: Ignore mapping IO port bar(1) 00:04:48.577 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:48.577 EAL: Ignore mapping IO port bar(1) 00:04:48.838 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:48.838 EAL: Ignore mapping IO port bar(1) 00:04:49.100 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:49.100 EAL: Ignore mapping IO port bar(1) 00:04:49.362 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:49.362 EAL: Ignore mapping IO port bar(1) 00:04:49.362 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:49.362 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:49.362 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:49.623 Starting DPDK initialization... 00:04:49.623 Starting SPDK post initialization... 00:04:49.623 SPDK NVMe probe 00:04:49.623 Attaching to 0000:65:00.0 00:04:49.623 Attached to 0000:65:00.0 00:04:49.623 Cleaning up... 00:04:51.536 00:04:51.536 real 0m5.720s 00:04:51.536 user 0m0.175s 00:04:51.536 sys 0m0.090s 00:04:51.536 12:08:56 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.536 12:08:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.536 ************************************ 00:04:51.536 END TEST env_dpdk_post_init 00:04:51.536 ************************************ 00:04:51.536 12:08:56 env -- env/env.sh@26 -- # uname 00:04:51.536 12:08:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.537 12:08:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.537 12:08:56 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.537 12:08:56 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.537 12:08:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.537 ************************************ 00:04:51.537 START TEST env_mem_callbacks 00:04:51.537 ************************************ 00:04:51.537 12:08:56 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.537 EAL: Detected CPU lcores: 128 00:04:51.537 EAL: Detected NUMA nodes: 2 00:04:51.537 EAL: Detected shared linkage of DPDK 00:04:51.537 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.537 EAL: Selected IOVA mode 'VA' 00:04:51.537 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.537 EAL: VFIO support initialized 00:04:51.537 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.537 00:04:51.537 00:04:51.537 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.537 http://cunit.sourceforge.net/ 00:04:51.537 00:04:51.537 00:04:51.537 Suite: memory 00:04:51.537 Test: test ... 
00:04:51.537 register 0x200000200000 2097152 00:04:51.537 malloc 3145728 00:04:51.537 register 0x200000400000 4194304 00:04:51.537 buf 0x200000500000 len 3145728 PASSED 00:04:51.537 malloc 64 00:04:51.537 buf 0x2000004fff40 len 64 PASSED 00:04:51.537 malloc 4194304 00:04:51.537 register 0x200000800000 6291456 00:04:51.537 buf 0x200000a00000 len 4194304 PASSED 00:04:51.537 free 0x200000500000 3145728 00:04:51.537 free 0x2000004fff40 64 00:04:51.537 unregister 0x200000400000 4194304 PASSED 00:04:51.537 free 0x200000a00000 4194304 00:04:51.537 unregister 0x200000800000 6291456 PASSED 00:04:51.537 malloc 8388608 00:04:51.537 register 0x200000400000 10485760 00:04:51.537 buf 0x200000600000 len 8388608 PASSED 00:04:51.537 free 0x200000600000 8388608 00:04:51.537 unregister 0x200000400000 10485760 PASSED 00:04:51.537 passed 00:04:51.537 00:04:51.537 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.537 suites 1 1 n/a 0 0 00:04:51.537 tests 1 1 1 0 0 00:04:51.537 asserts 15 15 15 0 n/a 00:04:51.537 00:04:51.537 Elapsed time = 0.008 seconds 00:04:51.537 00:04:51.537 real 0m0.068s 00:04:51.537 user 0m0.023s 00:04:51.537 sys 0m0.045s 00:04:51.537 12:08:56 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.537 12:08:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.537 ************************************ 00:04:51.537 END TEST env_mem_callbacks 00:04:51.537 ************************************ 00:04:51.537 00:04:51.537 real 0m7.332s 00:04:51.537 user 0m1.033s 00:04:51.537 sys 0m0.851s 00:04:51.537 12:08:56 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.537 12:08:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.537 ************************************ 00:04:51.537 END TEST env 00:04:51.537 ************************************ 00:04:51.537 12:08:56 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.537 12:08:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.537 12:08:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.537 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:04:51.537 ************************************ 00:04:51.537 START TEST rpc 00:04:51.537 ************************************ 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.537 * Looking for test storage... 00:04:51.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.537 12:08:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=418715 00:04:51.537 12:08:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.537 12:08:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:51.537 12:08:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 418715 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@830 -- # '[' -z 418715 ']' 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
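The rpc suite starting above launches a target with `spdk_tgt -e bdev` and blocks (waitforlisten) until the UNIX socket exists before issuing any calls. A minimal sketch of the same flow outside the harness; the polling loop and the rpc_get_methods smoke call are illustrative stand-ins for waitforlisten, not the helper itself:

    ./build/bin/spdk_tgt -e bdev &
    pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten
    ./scripts/rpc.py rpc_get_methods > /dev/null && echo "target is up (pid $pid)"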
00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:51.537 12:08:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.798 [2024-06-10 12:08:57.150896] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:51.798 [2024-06-10 12:08:57.150947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418715 ] 00:04:51.798 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.798 [2024-06-10 12:08:57.222424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.798 [2024-06-10 12:08:57.293715] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.798 [2024-06-10 12:08:57.293755] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 418715' to capture a snapshot of events at runtime. 00:04:51.798 [2024-06-10 12:08:57.293763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.798 [2024-06-10 12:08:57.293770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.798 [2024-06-10 12:08:57.293776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid418715 for offline analysis/debug. 00:04:51.798 [2024-06-10 12:08:57.293798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.371 12:08:57 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:52.371 12:08:57 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:52.371 12:08:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.371 12:08:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.371 12:08:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.371 12:08:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.371 12:08:57 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.371 12:08:57 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.371 12:08:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.371 ************************************ 00:04:52.371 START TEST rpc_integrity 00:04:52.371 ************************************ 00:04:52.371 12:08:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:52.371 12:08:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.371 12:08:57 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.371 12:08:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.371 12:08:57 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.371 12:08:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.632 12:08:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.632 12:08:58 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.632 { 00:04:52.632 "name": "Malloc0", 00:04:52.632 "aliases": [ 00:04:52.632 "111049bb-347d-45ed-b813-1c955481605b" 00:04:52.632 ], 00:04:52.632 "product_name": "Malloc disk", 00:04:52.632 "block_size": 512, 00:04:52.632 "num_blocks": 16384, 00:04:52.632 "uuid": "111049bb-347d-45ed-b813-1c955481605b", 00:04:52.632 "assigned_rate_limits": { 00:04:52.632 "rw_ios_per_sec": 0, 00:04:52.632 "rw_mbytes_per_sec": 0, 00:04:52.632 "r_mbytes_per_sec": 0, 00:04:52.632 "w_mbytes_per_sec": 0 00:04:52.632 }, 00:04:52.632 "claimed": false, 00:04:52.632 "zoned": false, 00:04:52.632 "supported_io_types": { 00:04:52.632 "read": true, 00:04:52.632 "write": true, 00:04:52.632 "unmap": true, 00:04:52.632 "write_zeroes": true, 00:04:52.632 "flush": true, 00:04:52.632 "reset": true, 00:04:52.632 "compare": false, 00:04:52.632 "compare_and_write": false, 00:04:52.632 "abort": true, 00:04:52.632 "nvme_admin": false, 00:04:52.632 "nvme_io": false 00:04:52.632 }, 00:04:52.632 "memory_domains": [ 00:04:52.632 { 00:04:52.632 "dma_device_id": "system", 00:04:52.632 "dma_device_type": 1 00:04:52.632 }, 00:04:52.632 { 00:04:52.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.632 "dma_device_type": 2 00:04:52.632 } 00:04:52.632 ], 00:04:52.632 "driver_specific": {} 00:04:52.632 } 00:04:52.632 ]' 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 [2024-06-10 12:08:58.101363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.632 [2024-06-10 12:08:58.101396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.632 [2024-06-10 12:08:58.101409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97b530 00:04:52.632 [2024-06-10 12:08:58.101416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.632 [2024-06-10 12:08:58.102715] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.632 [2024-06-10 12:08:58.102736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.632 Passthru0 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.632 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.633 { 00:04:52.633 "name": "Malloc0", 00:04:52.633 "aliases": [ 00:04:52.633 "111049bb-347d-45ed-b813-1c955481605b" 00:04:52.633 ], 00:04:52.633 "product_name": "Malloc disk", 00:04:52.633 "block_size": 512, 00:04:52.633 "num_blocks": 16384, 00:04:52.633 "uuid": "111049bb-347d-45ed-b813-1c955481605b", 00:04:52.633 "assigned_rate_limits": { 00:04:52.633 "rw_ios_per_sec": 0, 00:04:52.633 "rw_mbytes_per_sec": 0, 00:04:52.633 "r_mbytes_per_sec": 0, 00:04:52.633 "w_mbytes_per_sec": 0 00:04:52.633 }, 00:04:52.633 "claimed": true, 00:04:52.633 "claim_type": "exclusive_write", 00:04:52.633 "zoned": false, 00:04:52.633 "supported_io_types": { 00:04:52.633 "read": true, 00:04:52.633 "write": true, 00:04:52.633 "unmap": true, 00:04:52.633 "write_zeroes": true, 00:04:52.633 "flush": true, 00:04:52.633 "reset": true, 00:04:52.633 "compare": false, 00:04:52.633 "compare_and_write": false, 00:04:52.633 "abort": true, 00:04:52.633 "nvme_admin": false, 00:04:52.633 "nvme_io": false 00:04:52.633 }, 00:04:52.633 "memory_domains": [ 00:04:52.633 { 00:04:52.633 "dma_device_id": "system", 00:04:52.633 "dma_device_type": 1 00:04:52.633 }, 00:04:52.633 { 00:04:52.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.633 "dma_device_type": 2 00:04:52.633 } 00:04:52.633 ], 00:04:52.633 "driver_specific": {} 00:04:52.633 }, 00:04:52.633 { 00:04:52.633 "name": "Passthru0", 00:04:52.633 "aliases": [ 00:04:52.633 "a52022d9-f0eb-57a1-81be-4124445682df" 00:04:52.633 ], 00:04:52.633 "product_name": "passthru", 00:04:52.633 "block_size": 512, 00:04:52.633 "num_blocks": 16384, 00:04:52.633 "uuid": "a52022d9-f0eb-57a1-81be-4124445682df", 00:04:52.633 "assigned_rate_limits": { 00:04:52.633 "rw_ios_per_sec": 0, 00:04:52.633 "rw_mbytes_per_sec": 0, 00:04:52.633 "r_mbytes_per_sec": 0, 00:04:52.633 "w_mbytes_per_sec": 0 00:04:52.633 }, 00:04:52.633 "claimed": false, 00:04:52.633 "zoned": false, 00:04:52.633 "supported_io_types": { 00:04:52.633 "read": true, 00:04:52.633 "write": true, 00:04:52.633 "unmap": true, 00:04:52.633 "write_zeroes": true, 00:04:52.633 "flush": true, 00:04:52.633 "reset": true, 00:04:52.633 "compare": false, 00:04:52.633 "compare_and_write": false, 00:04:52.633 "abort": true, 00:04:52.633 "nvme_admin": false, 00:04:52.633 "nvme_io": false 00:04:52.633 }, 00:04:52.633 "memory_domains": [ 00:04:52.633 { 00:04:52.633 "dma_device_id": "system", 00:04:52.633 "dma_device_type": 1 00:04:52.633 }, 00:04:52.633 { 00:04:52.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.633 "dma_device_type": 2 00:04:52.633 } 00:04:52.633 ], 00:04:52.633 "driver_specific": { 00:04:52.633 "passthru": { 00:04:52.633 "name": "Passthru0", 00:04:52.633 "base_bdev_name": "Malloc0" 00:04:52.633 } 00:04:52.633 } 00:04:52.633 } 00:04:52.633 ]' 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.633 
12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.633 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.633 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.894 12:08:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.894 00:04:52.894 real 0m0.285s 00:04:52.894 user 0m0.189s 00:04:52.894 sys 0m0.029s 00:04:52.894 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 ************************************ 00:04:52.894 END TEST rpc_integrity 00:04:52.894 ************************************ 00:04:52.894 12:08:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.894 12:08:58 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.894 12:08:58 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.894 12:08:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 ************************************ 00:04:52.894 START TEST rpc_plugins 00:04:52.894 ************************************ 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.894 { 00:04:52.894 "name": "Malloc1", 00:04:52.894 "aliases": [ 00:04:52.894 "16cf586b-0ab1-483f-a930-e2c5774a8ce6" 00:04:52.894 ], 00:04:52.894 "product_name": "Malloc disk", 00:04:52.894 "block_size": 4096, 00:04:52.894 "num_blocks": 256, 00:04:52.894 "uuid": "16cf586b-0ab1-483f-a930-e2c5774a8ce6", 00:04:52.894 "assigned_rate_limits": { 00:04:52.894 "rw_ios_per_sec": 0, 00:04:52.894 "rw_mbytes_per_sec": 0, 00:04:52.894 "r_mbytes_per_sec": 0, 00:04:52.894 "w_mbytes_per_sec": 0 00:04:52.894 }, 00:04:52.894 "claimed": false, 00:04:52.894 "zoned": false, 00:04:52.894 "supported_io_types": { 00:04:52.894 "read": true, 00:04:52.894 "write": true, 00:04:52.894 "unmap": true, 00:04:52.894 "write_zeroes": true, 00:04:52.894 
"flush": true, 00:04:52.894 "reset": true, 00:04:52.894 "compare": false, 00:04:52.894 "compare_and_write": false, 00:04:52.894 "abort": true, 00:04:52.894 "nvme_admin": false, 00:04:52.894 "nvme_io": false 00:04:52.894 }, 00:04:52.894 "memory_domains": [ 00:04:52.894 { 00:04:52.894 "dma_device_id": "system", 00:04:52.894 "dma_device_type": 1 00:04:52.894 }, 00:04:52.894 { 00:04:52.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.894 "dma_device_type": 2 00:04:52.894 } 00:04:52.894 ], 00:04:52.894 "driver_specific": {} 00:04:52.894 } 00:04:52.894 ]' 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.894 12:08:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.894 00:04:52.894 real 0m0.147s 00:04:52.894 user 0m0.098s 00:04:52.894 sys 0m0.014s 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.894 12:08:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.894 ************************************ 00:04:52.894 END TEST rpc_plugins 00:04:52.894 ************************************ 00:04:53.154 12:08:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:53.154 12:08:58 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.154 12:08:58 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.154 12:08:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.154 ************************************ 00:04:53.154 START TEST rpc_trace_cmd_test 00:04:53.154 ************************************ 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.154 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:53.154 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid418715", 00:04:53.154 "tpoint_group_mask": "0x8", 00:04:53.154 "iscsi_conn": { 00:04:53.154 "mask": "0x2", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "scsi": { 00:04:53.154 "mask": "0x4", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "bdev": { 00:04:53.154 "mask": "0x8", 00:04:53.154 "tpoint_mask": 
"0xffffffffffffffff" 00:04:53.154 }, 00:04:53.154 "nvmf_rdma": { 00:04:53.154 "mask": "0x10", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "nvmf_tcp": { 00:04:53.154 "mask": "0x20", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "ftl": { 00:04:53.154 "mask": "0x40", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "blobfs": { 00:04:53.154 "mask": "0x80", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.155 "dsa": { 00:04:53.155 "mask": "0x200", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "thread": { 00:04:53.155 "mask": "0x400", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "nvme_pcie": { 00:04:53.155 "mask": "0x800", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "iaa": { 00:04:53.155 "mask": "0x1000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "nvme_tcp": { 00:04:53.155 "mask": "0x2000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "bdev_nvme": { 00:04:53.155 "mask": "0x4000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "sock": { 00:04:53.155 "mask": "0x8000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 } 00:04:53.155 }' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.155 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.413 12:08:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.413 00:04:53.413 real 0m0.247s 00:04:53.413 user 0m0.213s 00:04:53.413 sys 0m0.026s 00:04:53.413 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.413 12:08:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.413 ************************************ 00:04:53.413 END TEST rpc_trace_cmd_test 00:04:53.413 ************************************ 00:04:53.413 12:08:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.414 12:08:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.414 12:08:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.414 12:08:58 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.414 12:08:58 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.414 12:08:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 ************************************ 00:04:53.414 START TEST rpc_daemon_integrity 00:04:53.414 ************************************ 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.414 { 00:04:53.414 "name": "Malloc2", 00:04:53.414 "aliases": [ 00:04:53.414 "7106bcc7-0cec-4021-ab07-fb25bfff05a9" 00:04:53.414 ], 00:04:53.414 "product_name": "Malloc disk", 00:04:53.414 "block_size": 512, 00:04:53.414 "num_blocks": 16384, 00:04:53.414 "uuid": "7106bcc7-0cec-4021-ab07-fb25bfff05a9", 00:04:53.414 "assigned_rate_limits": { 00:04:53.414 "rw_ios_per_sec": 0, 00:04:53.414 "rw_mbytes_per_sec": 0, 00:04:53.414 "r_mbytes_per_sec": 0, 00:04:53.414 "w_mbytes_per_sec": 0 00:04:53.414 }, 00:04:53.414 "claimed": false, 00:04:53.414 "zoned": false, 00:04:53.414 "supported_io_types": { 00:04:53.414 "read": true, 00:04:53.414 "write": true, 00:04:53.414 "unmap": true, 00:04:53.414 "write_zeroes": true, 00:04:53.414 "flush": true, 00:04:53.414 "reset": true, 00:04:53.414 "compare": false, 00:04:53.414 "compare_and_write": false, 00:04:53.414 "abort": true, 00:04:53.414 "nvme_admin": false, 00:04:53.414 "nvme_io": false 00:04:53.414 }, 00:04:53.414 "memory_domains": [ 00:04:53.414 { 00:04:53.414 "dma_device_id": "system", 00:04:53.414 "dma_device_type": 1 00:04:53.414 }, 00:04:53.414 { 00:04:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.414 "dma_device_type": 2 00:04:53.414 } 00:04:53.414 ], 00:04:53.414 "driver_specific": {} 00:04:53.414 } 00:04:53.414 ]' 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.414 12:08:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 [2024-06-10 12:08:58.999811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.414 [2024-06-10 12:08:58.999843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.414 [2024-06-10 12:08:58.999854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97dc70 00:04:53.414 [2024-06-10 12:08:58.999861] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.414 [2024-06-10 12:08:59.001089] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.414 [2024-06-10 12:08:59.001110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.414 Passthru0 00:04:53.414 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.414 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.414 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.414 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.674 { 00:04:53.674 "name": "Malloc2", 00:04:53.674 "aliases": [ 00:04:53.674 "7106bcc7-0cec-4021-ab07-fb25bfff05a9" 00:04:53.674 ], 00:04:53.674 "product_name": "Malloc disk", 00:04:53.674 "block_size": 512, 00:04:53.674 "num_blocks": 16384, 00:04:53.674 "uuid": "7106bcc7-0cec-4021-ab07-fb25bfff05a9", 00:04:53.674 "assigned_rate_limits": { 00:04:53.674 "rw_ios_per_sec": 0, 00:04:53.674 "rw_mbytes_per_sec": 0, 00:04:53.674 "r_mbytes_per_sec": 0, 00:04:53.674 "w_mbytes_per_sec": 0 00:04:53.674 }, 00:04:53.674 "claimed": true, 00:04:53.674 "claim_type": "exclusive_write", 00:04:53.674 "zoned": false, 00:04:53.674 "supported_io_types": { 00:04:53.674 "read": true, 00:04:53.674 "write": true, 00:04:53.674 "unmap": true, 00:04:53.674 "write_zeroes": true, 00:04:53.674 "flush": true, 00:04:53.674 "reset": true, 00:04:53.674 "compare": false, 00:04:53.674 "compare_and_write": false, 00:04:53.674 "abort": true, 00:04:53.674 "nvme_admin": false, 00:04:53.674 "nvme_io": false 00:04:53.674 }, 00:04:53.674 "memory_domains": [ 00:04:53.674 { 00:04:53.674 "dma_device_id": "system", 00:04:53.674 "dma_device_type": 1 00:04:53.674 }, 00:04:53.674 { 00:04:53.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.674 "dma_device_type": 2 00:04:53.674 } 00:04:53.674 ], 00:04:53.674 "driver_specific": {} 00:04:53.674 }, 00:04:53.674 { 00:04:53.674 "name": "Passthru0", 00:04:53.674 "aliases": [ 00:04:53.674 "e0bd79ba-024c-5906-b074-a15d0afa0f68" 00:04:53.674 ], 00:04:53.674 "product_name": "passthru", 00:04:53.674 "block_size": 512, 00:04:53.674 "num_blocks": 16384, 00:04:53.674 "uuid": "e0bd79ba-024c-5906-b074-a15d0afa0f68", 00:04:53.674 "assigned_rate_limits": { 00:04:53.674 "rw_ios_per_sec": 0, 00:04:53.674 "rw_mbytes_per_sec": 0, 00:04:53.674 "r_mbytes_per_sec": 0, 00:04:53.674 "w_mbytes_per_sec": 0 00:04:53.674 }, 00:04:53.674 "claimed": false, 00:04:53.674 "zoned": false, 00:04:53.674 "supported_io_types": { 00:04:53.674 "read": true, 00:04:53.674 "write": true, 00:04:53.674 "unmap": true, 00:04:53.674 "write_zeroes": true, 00:04:53.674 "flush": true, 00:04:53.674 "reset": true, 00:04:53.674 "compare": false, 00:04:53.674 "compare_and_write": false, 00:04:53.674 "abort": true, 00:04:53.674 "nvme_admin": false, 00:04:53.674 "nvme_io": false 00:04:53.674 }, 00:04:53.674 "memory_domains": [ 00:04:53.674 { 00:04:53.674 "dma_device_id": "system", 00:04:53.674 "dma_device_type": 1 00:04:53.674 }, 00:04:53.674 { 00:04:53.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.674 "dma_device_type": 2 00:04:53.674 } 00:04:53.674 ], 00:04:53.674 "driver_specific": { 00:04:53.674 "passthru": { 00:04:53.674 "name": "Passthru0", 00:04:53.674 "base_bdev_name": "Malloc2" 00:04:53.674 } 00:04:53.674 } 00:04:53.674 } 00:04:53.674 ]' 00:04:53.674 12:08:59 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.674 00:04:53.674 real 0m0.294s 00:04:53.674 user 0m0.181s 00:04:53.674 sys 0m0.043s 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.674 12:08:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.674 ************************************ 00:04:53.674 END TEST rpc_daemon_integrity 00:04:53.674 ************************************ 00:04:53.674 12:08:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.674 12:08:59 rpc -- rpc/rpc.sh@84 -- # killprocess 418715 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@949 -- # '[' -z 418715 ']' 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@953 -- # kill -0 418715 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@954 -- # uname 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 418715 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 418715' 00:04:53.674 killing process with pid 418715 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@968 -- # kill 418715 00:04:53.674 12:08:59 rpc -- common/autotest_common.sh@973 -- # wait 418715 00:04:53.935 00:04:53.935 real 0m2.450s 00:04:53.935 user 0m3.230s 00:04:53.935 sys 0m0.676s 00:04:53.935 12:08:59 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.935 12:08:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.935 ************************************ 00:04:53.935 END TEST rpc 00:04:53.935 ************************************ 00:04:53.935 12:08:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.935 12:08:59 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.935 12:08:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.935 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.935 ************************************ 00:04:53.935 START TEST skip_rpc 00:04:53.935 ************************************ 00:04:53.935 12:08:59 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:54.196 * Looking for test storage... 00:04:54.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.196 12:08:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.196 12:08:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.196 12:08:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.196 12:08:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.196 12:08:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.196 12:08:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.196 ************************************ 00:04:54.196 START TEST skip_rpc 00:04:54.196 ************************************ 00:04:54.196 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:54.196 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=419256 00:04:54.196 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.196 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.196 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:54.196 [2024-06-10 12:08:59.712211] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
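The NOT/es=1 exchange that follows is the whole point of skip_rpc: with --no-rpc-server the target never creates /var/tmp/spdk.sock, so every RPC has to fail, and the test passes only if it does. A hand-run sketch, assuming the same build paths as above:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # same invocation as rpc/skip_rpc.sh@15
  sleep 5                                         # mirrors the harness's fixed settle time
  ./scripts/rpc.py spdk_get_version               # expected to fail: no RPC socket exists
  echo $?                                         # non-zero here is the passing condition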
00:04:54.196 [2024-06-10 12:08:59.712256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419256 ] 00:04:54.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.196 [2024-06-10 12:08:59.780256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.456 [2024-06-10 12:08:59.844902] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 419256 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 419256 ']' 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 419256 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 419256 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 419256' 00:04:59.743 killing process with pid 419256 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 419256 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 419256 00:04:59.743 00:04:59.743 real 0m5.278s 00:04:59.743 user 0m5.077s 00:04:59.743 sys 0m0.233s 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.743 12:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.743 ************************************ 00:04:59.743 END TEST skip_rpc 
00:04:59.743 ************************************ 00:04:59.743 12:09:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.743 12:09:04 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:59.743 12:09:04 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.743 12:09:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.743 ************************************ 00:04:59.743 START TEST skip_rpc_with_json 00:04:59.743 ************************************ 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=420587 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 420587 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 420587 ']' 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:59.743 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.743 [2024-06-10 12:09:05.072018] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
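skip_rpc_with_json exercises the configuration round trip: the fresh target first reports that no TCP transport exists, one is created over RPC, the full state is dumped with save_config, and a second target is later booted straight from that file (the --json relaunch appears further down). Condensed to its RPC core, assuming the default socket and illustrative paths:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails at first: transport 'tcp' does not exist
  ./scripts/rpc.py nvmf_create_transport -t tcp       # target logs '*** TCP Transport Init ***'
  ./scripts/rpc.py save_config > config.json          # snapshot of every subsystem's settings
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json   # replay the config with no RPCs at all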
00:04:59.743 [2024-06-10 12:09:05.072078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420587 ] 00:04:59.743 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.743 [2024-06-10 12:09:05.142164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.743 [2024-06-10 12:09:05.216999] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.315 [2024-06-10 12:09:05.832105] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.315 request: 00:05:00.315 { 00:05:00.315 "trtype": "tcp", 00:05:00.315 "method": "nvmf_get_transports", 00:05:00.315 "req_id": 1 00:05:00.315 } 00:05:00.315 Got JSON-RPC error response 00:05:00.315 response: 00:05:00.315 { 00:05:00.315 "code": -19, 00:05:00.315 "message": "No such device" 00:05:00.315 } 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.315 [2024-06-10 12:09:05.844235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.315 12:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.584 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.584 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.584 { 00:05:00.584 "subsystems": [ 00:05:00.584 { 00:05:00.584 "subsystem": "vfio_user_target", 00:05:00.584 "config": null 00:05:00.584 }, 00:05:00.584 { 00:05:00.584 "subsystem": "keyring", 00:05:00.584 "config": [] 00:05:00.584 }, 00:05:00.584 { 00:05:00.584 "subsystem": "iobuf", 00:05:00.584 "config": [ 00:05:00.584 { 00:05:00.585 "method": "iobuf_set_options", 00:05:00.585 "params": { 00:05:00.585 "small_pool_count": 8192, 00:05:00.585 "large_pool_count": 1024, 00:05:00.585 "small_bufsize": 8192, 00:05:00.585 "large_bufsize": 135168 00:05:00.585 } 00:05:00.585 } 00:05:00.585 ] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "sock", 00:05:00.585 "config": [ 00:05:00.585 { 00:05:00.585 "method": "sock_set_default_impl", 00:05:00.585 "params": { 00:05:00.585 "impl_name": "posix" 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": 
"sock_impl_set_options", 00:05:00.585 "params": { 00:05:00.585 "impl_name": "ssl", 00:05:00.585 "recv_buf_size": 4096, 00:05:00.585 "send_buf_size": 4096, 00:05:00.585 "enable_recv_pipe": true, 00:05:00.585 "enable_quickack": false, 00:05:00.585 "enable_placement_id": 0, 00:05:00.585 "enable_zerocopy_send_server": true, 00:05:00.585 "enable_zerocopy_send_client": false, 00:05:00.585 "zerocopy_threshold": 0, 00:05:00.585 "tls_version": 0, 00:05:00.585 "enable_ktls": false 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "sock_impl_set_options", 00:05:00.585 "params": { 00:05:00.585 "impl_name": "posix", 00:05:00.585 "recv_buf_size": 2097152, 00:05:00.585 "send_buf_size": 2097152, 00:05:00.585 "enable_recv_pipe": true, 00:05:00.585 "enable_quickack": false, 00:05:00.585 "enable_placement_id": 0, 00:05:00.585 "enable_zerocopy_send_server": true, 00:05:00.585 "enable_zerocopy_send_client": false, 00:05:00.585 "zerocopy_threshold": 0, 00:05:00.585 "tls_version": 0, 00:05:00.585 "enable_ktls": false 00:05:00.585 } 00:05:00.585 } 00:05:00.585 ] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "vmd", 00:05:00.585 "config": [] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "accel", 00:05:00.585 "config": [ 00:05:00.585 { 00:05:00.585 "method": "accel_set_options", 00:05:00.585 "params": { 00:05:00.585 "small_cache_size": 128, 00:05:00.585 "large_cache_size": 16, 00:05:00.585 "task_count": 2048, 00:05:00.585 "sequence_count": 2048, 00:05:00.585 "buf_count": 2048 00:05:00.585 } 00:05:00.585 } 00:05:00.585 ] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "bdev", 00:05:00.585 "config": [ 00:05:00.585 { 00:05:00.585 "method": "bdev_set_options", 00:05:00.585 "params": { 00:05:00.585 "bdev_io_pool_size": 65535, 00:05:00.585 "bdev_io_cache_size": 256, 00:05:00.585 "bdev_auto_examine": true, 00:05:00.585 "iobuf_small_cache_size": 128, 00:05:00.585 "iobuf_large_cache_size": 16 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "bdev_raid_set_options", 00:05:00.585 "params": { 00:05:00.585 "process_window_size_kb": 1024 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "bdev_iscsi_set_options", 00:05:00.585 "params": { 00:05:00.585 "timeout_sec": 30 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "bdev_nvme_set_options", 00:05:00.585 "params": { 00:05:00.585 "action_on_timeout": "none", 00:05:00.585 "timeout_us": 0, 00:05:00.585 "timeout_admin_us": 0, 00:05:00.585 "keep_alive_timeout_ms": 10000, 00:05:00.585 "arbitration_burst": 0, 00:05:00.585 "low_priority_weight": 0, 00:05:00.585 "medium_priority_weight": 0, 00:05:00.585 "high_priority_weight": 0, 00:05:00.585 "nvme_adminq_poll_period_us": 10000, 00:05:00.585 "nvme_ioq_poll_period_us": 0, 00:05:00.585 "io_queue_requests": 0, 00:05:00.585 "delay_cmd_submit": true, 00:05:00.585 "transport_retry_count": 4, 00:05:00.585 "bdev_retry_count": 3, 00:05:00.585 "transport_ack_timeout": 0, 00:05:00.585 "ctrlr_loss_timeout_sec": 0, 00:05:00.585 "reconnect_delay_sec": 0, 00:05:00.585 "fast_io_fail_timeout_sec": 0, 00:05:00.585 "disable_auto_failback": false, 00:05:00.585 "generate_uuids": false, 00:05:00.585 "transport_tos": 0, 00:05:00.585 "nvme_error_stat": false, 00:05:00.585 "rdma_srq_size": 0, 00:05:00.585 "io_path_stat": false, 00:05:00.585 "allow_accel_sequence": false, 00:05:00.585 "rdma_max_cq_size": 0, 00:05:00.585 "rdma_cm_event_timeout_ms": 0, 00:05:00.585 "dhchap_digests": [ 00:05:00.585 "sha256", 00:05:00.585 "sha384", 00:05:00.585 "sha512" 
00:05:00.585 ], 00:05:00.585 "dhchap_dhgroups": [ 00:05:00.585 "null", 00:05:00.585 "ffdhe2048", 00:05:00.585 "ffdhe3072", 00:05:00.585 "ffdhe4096", 00:05:00.585 "ffdhe6144", 00:05:00.585 "ffdhe8192" 00:05:00.585 ] 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "bdev_nvme_set_hotplug", 00:05:00.585 "params": { 00:05:00.585 "period_us": 100000, 00:05:00.585 "enable": false 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "bdev_wait_for_examine" 00:05:00.585 } 00:05:00.585 ] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "scsi", 00:05:00.585 "config": null 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "scheduler", 00:05:00.585 "config": [ 00:05:00.585 { 00:05:00.585 "method": "framework_set_scheduler", 00:05:00.585 "params": { 00:05:00.585 "name": "static" 00:05:00.585 } 00:05:00.585 } 00:05:00.585 ] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "vhost_scsi", 00:05:00.585 "config": [] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "vhost_blk", 00:05:00.585 "config": [] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "ublk", 00:05:00.585 "config": [] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "nbd", 00:05:00.585 "config": [] 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "subsystem": "nvmf", 00:05:00.585 "config": [ 00:05:00.585 { 00:05:00.585 "method": "nvmf_set_config", 00:05:00.585 "params": { 00:05:00.585 "discovery_filter": "match_any", 00:05:00.585 "admin_cmd_passthru": { 00:05:00.585 "identify_ctrlr": false 00:05:00.585 } 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "nvmf_set_max_subsystems", 00:05:00.585 "params": { 00:05:00.585 "max_subsystems": 1024 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "nvmf_set_crdt", 00:05:00.585 "params": { 00:05:00.585 "crdt1": 0, 00:05:00.585 "crdt2": 0, 00:05:00.585 "crdt3": 0 00:05:00.585 } 00:05:00.585 }, 00:05:00.585 { 00:05:00.585 "method": "nvmf_create_transport", 00:05:00.585 "params": { 00:05:00.585 "trtype": "TCP", 00:05:00.585 "max_queue_depth": 128, 00:05:00.585 "max_io_qpairs_per_ctrlr": 127, 00:05:00.585 "in_capsule_data_size": 4096, 00:05:00.585 "max_io_size": 131072, 00:05:00.586 "io_unit_size": 131072, 00:05:00.586 "max_aq_depth": 128, 00:05:00.586 "num_shared_buffers": 511, 00:05:00.586 "buf_cache_size": 4294967295, 00:05:00.586 "dif_insert_or_strip": false, 00:05:00.586 "zcopy": false, 00:05:00.586 "c2h_success": true, 00:05:00.586 "sock_priority": 0, 00:05:00.586 "abort_timeout_sec": 1, 00:05:00.586 "ack_timeout": 0, 00:05:00.586 "data_wr_pool_size": 0 00:05:00.586 } 00:05:00.586 } 00:05:00.586 ] 00:05:00.586 }, 00:05:00.586 { 00:05:00.586 "subsystem": "iscsi", 00:05:00.586 "config": [ 00:05:00.586 { 00:05:00.586 "method": "iscsi_set_options", 00:05:00.586 "params": { 00:05:00.586 "node_base": "iqn.2016-06.io.spdk", 00:05:00.586 "max_sessions": 128, 00:05:00.586 "max_connections_per_session": 2, 00:05:00.586 "max_queue_depth": 64, 00:05:00.586 "default_time2wait": 2, 00:05:00.586 "default_time2retain": 20, 00:05:00.586 "first_burst_length": 8192, 00:05:00.586 "immediate_data": true, 00:05:00.586 "allow_duplicated_isid": false, 00:05:00.586 "error_recovery_level": 0, 00:05:00.586 "nop_timeout": 60, 00:05:00.586 "nop_in_interval": 30, 00:05:00.586 "disable_chap": false, 00:05:00.586 "require_chap": false, 00:05:00.586 "mutual_chap": false, 00:05:00.586 "chap_group": 0, 00:05:00.586 "max_large_datain_per_connection": 64, 00:05:00.586 "max_r2t_per_connection": 4, 00:05:00.586 
"pdu_pool_size": 36864, 00:05:00.586 "immediate_data_pool_size": 16384, 00:05:00.586 "data_out_pool_size": 2048 00:05:00.586 } 00:05:00.586 } 00:05:00.586 ] 00:05:00.586 } 00:05:00.586 ] 00:05:00.586 } 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 420587 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 420587 ']' 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 420587 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 420587 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 420587' 00:05:00.586 killing process with pid 420587 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 420587 00:05:00.586 12:09:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 420587 00:05:00.851 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=420736 00:05:00.851 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:00.851 12:09:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 420736 ']' 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 420736' 00:05:06.189 killing process with pid 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 420736 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.189 00:05:06.189 real 0m6.538s 
00:05:06.189 user 0m6.375s 00:05:06.189 sys 0m0.564s 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 ************************************ 00:05:06.189 END TEST skip_rpc_with_json 00:05:06.189 ************************************ 00:05:06.189 12:09:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 ************************************ 00:05:06.189 START TEST skip_rpc_with_delay 00:05:06.189 ************************************ 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.189 [2024-06-10 12:09:11.687777] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
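That *ERROR* is the expected result, not a harness failure: --wait-for-rpc tells the app to pause startup until a framework_start_init RPC arrives, which can never happen once --no-rpc-server guarantees there is no listener. skip_rpc_with_delay only asserts that the contradictory pair is rejected up front; a reproduction sketch with the same binary:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # refused before init begins
  echo $?                                                      # non-zero; the es=1 check below encodes this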
00:05:06.189 [2024-06-10 12:09:11.687863] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:06.189 00:05:06.189 real 0m0.075s 00:05:06.189 user 0m0.042s 00:05:06.189 sys 0m0.033s 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.189 12:09:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 ************************************ 00:05:06.189 END TEST skip_rpc_with_delay 00:05:06.189 ************************************ 00:05:06.189 12:09:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.189 12:09:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.189 12:09:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.189 12:09:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 ************************************ 00:05:06.189 START TEST exit_on_failed_rpc_init 00:05:06.189 ************************************ 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=422453 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 422453 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 422453 ']' 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:06.189 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.450 [2024-06-10 12:09:11.847114] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:06.450 [2024-06-10 12:09:11.847172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422453 ] 00:05:06.450 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.450 [2024-06-10 12:09:11.917943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.450 [2024-06-10 12:09:11.992720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:07.021 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.281 [2024-06-10 12:09:12.665937] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:07.281 [2024-06-10 12:09:12.665987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422567 ] 00:05:07.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.281 [2024-06-10 12:09:12.746530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.281 [2024-06-10 12:09:12.810760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.281 [2024-06-10 12:09:12.810821] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
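The "socket in use" error is the point of exit_on_failed_rpc_init: a second target bound to the same default RPC socket must fail to initialize. A standalone reproduction sketch (assumptions: spdk_tgt is on PATH, hugepages are configured, and the fixed sleep stands in for the waitforlisten poll the real test uses):

spdk_tgt -m 0x1 &                 # first instance claims /var/tmp/spdk.sock
first_pid=$!
sleep 1
spdk_tgt -m 0x2                   # expected to exit non-zero: socket in use
echo "second instance rc=$?"
kill -SIGINT "$first_pid"
# Starting the second target on a distinct socket, e.g. -r /var/tmp/spdk2.sock,
# would avoid the collision entirely.
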
00:05:07.281 [2024-06-10 12:09:12.810831] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:07.281 [2024-06-10 12:09:12.810837] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 422453 00:05:07.281 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 422453 ']' 00:05:07.282 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 422453 00:05:07.282 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:07.282 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:07.282 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 422453 00:05:07.543 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:07.543 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:07.543 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 422453' 00:05:07.543 killing process with pid 422453 00:05:07.543 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 422453 00:05:07.543 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 422453 00:05:07.543 00:05:07.543 real 0m1.345s 00:05:07.543 user 0m1.555s 00:05:07.543 sys 0m0.393s 00:05:07.543 12:09:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:07.543 12:09:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.543 ************************************ 00:05:07.543 END TEST exit_on_failed_rpc_init 00:05:07.543 ************************************ 00:05:07.803 12:09:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.803 00:05:07.803 real 0m13.648s 00:05:07.803 user 0m13.216s 00:05:07.803 sys 0m1.490s 00:05:07.803 12:09:13 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:07.803 12:09:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.803 ************************************ 00:05:07.803 END TEST skip_rpc 00:05:07.803 ************************************ 00:05:07.803 12:09:13 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.803 12:09:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:07.803 12:09:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:07.803 12:09:13 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.804 ************************************ 00:05:07.804 START TEST rpc_client 00:05:07.804 ************************************ 00:05:07.804 12:09:13 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.804 * Looking for test storage... 00:05:07.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:07.804 12:09:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:07.804 OK 00:05:07.804 12:09:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.804 00:05:07.804 real 0m0.127s 00:05:07.804 user 0m0.053s 00:05:07.804 sys 0m0.083s 00:05:07.804 12:09:13 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:07.804 12:09:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.804 ************************************ 00:05:07.804 END TEST rpc_client 00:05:07.804 ************************************ 00:05:08.064 12:09:13 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.064 12:09:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:08.064 12:09:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.064 12:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.064 ************************************ 00:05:08.064 START TEST json_config 00:05:08.064 ************************************ 00:05:08.064 12:09:13 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.064 12:09:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.064 12:09:13 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.064 12:09:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.064 12:09:13 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.064 12:09:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.064 12:09:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.064 12:09:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.065 12:09:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.065 12:09:13 json_config -- paths/export.sh@5 -- # export PATH 00:05:08.065 12:09:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@47 -- # : 0 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.065 12:09:13 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:08.065 INFO: JSON configuration test init 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.065 12:09:13 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:08.065 12:09:13 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.065 12:09:13 json_config -- json_config/common.sh@10 -- # shift 00:05:08.065 12:09:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.065 12:09:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.065 12:09:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.065 12:09:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.065 12:09:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.065 12:09:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=422950 00:05:08.065 12:09:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.065 Waiting for target to run... 
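waitforlisten, traced next, blocks until the freshly launched target answers on its UNIX socket. A rough sketch of that polling idea (an assumption about the approach, not the common.sh source; the retry count and sleep interval are illustrative):

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
wait_for_rpc_socket() {
    local sock=$1 retries=100
    while (( retries-- > 0 )); do
        # rpc_get_methods is answered even while the target waits for init RPCs.
        if "$RPC_PY" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
wait_for_rpc_socket /var/tmp/spdk_tgt.sock
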
00:05:08.065 12:09:13 json_config -- json_config/common.sh@25 -- # waitforlisten 422950 /var/tmp/spdk_tgt.sock 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@830 -- # '[' -z 422950 ']' 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.065 12:09:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:08.065 12:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.065 [2024-06-10 12:09:13.627256] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:08.065 [2024-06-10 12:09:13.627322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422950 ] 00:05:08.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.325 [2024-06-10 12:09:13.924500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.586 [2024-06-10 12:09:13.976713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:08.848 12:09:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.848 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:08.848 12:09:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.848 12:09:14 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:08.848 12:09:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.420 12:09:14 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:09.420 12:09:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.420 12:09:14 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:09.420 12:09:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.420 12:09:14 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:09.681 12:09:15 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:09.681 12:09:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:09.681 12:09:15 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:09.681 12:09:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:09.681 12:09:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.681 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.940 MallocForNvmf0 00:05:09.940 12:09:15 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.940 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.940 MallocForNvmf1 00:05:09.940 12:09:15 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.940 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.200 [2024-06-10 12:09:15.648243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.200 12:09:15 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.200 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.460 12:09:15 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.460 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.460 12:09:15 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.460 12:09:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.721 12:09:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.721 12:09:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.721 [2024-06-10 12:09:16.234324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.721 12:09:16 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:10.721 12:09:16 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:10.721 12:09:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.721 12:09:16 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:10.721 12:09:16 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:10.721 12:09:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.721 12:09:16 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:10.721 12:09:16 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.721 12:09:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.981 MallocBdevForConfigChangeCheck 00:05:10.981 12:09:16 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:10.981 12:09:16 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:10.981 12:09:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.981 12:09:16 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:10.981 12:09:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.241 12:09:16 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:11.241 INFO: shutting down applications... 
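Condensed, the NVMe-oF target configuration replayed above (bdevs, transport, subsystem, namespaces, listener) is this rpc.py sequence against the running target; the binary path, socket, and arguments are taken verbatim from the trace, and the order matters since the transport must exist before the listener is added:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
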
00:05:11.241 12:09:16 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:11.241 12:09:16 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:11.241 12:09:16 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:11.241 12:09:16 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.811 Calling clear_iscsi_subsystem 00:05:11.811 Calling clear_nvmf_subsystem 00:05:11.811 Calling clear_nbd_subsystem 00:05:11.811 Calling clear_ublk_subsystem 00:05:11.811 Calling clear_vhost_blk_subsystem 00:05:11.811 Calling clear_vhost_scsi_subsystem 00:05:11.811 Calling clear_bdev_subsystem 00:05:11.811 12:09:17 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:11.811 12:09:17 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:11.811 12:09:17 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:11.811 12:09:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.812 12:09:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.812 12:09:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.072 12:09:17 json_config -- json_config/json_config.sh@345 -- # break 00:05:12.072 12:09:17 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:12.072 12:09:17 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:12.072 12:09:17 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.072 12:09:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.072 12:09:17 json_config -- json_config/common.sh@35 -- # [[ -n 422950 ]] 00:05:12.072 12:09:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 422950 00:05:12.072 12:09:17 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.072 12:09:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.072 12:09:17 json_config -- json_config/common.sh@41 -- # kill -0 422950 00:05:12.072 12:09:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.644 12:09:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.644 12:09:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.644 12:09:18 json_config -- json_config/common.sh@41 -- # kill -0 422950 00:05:12.644 12:09:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.644 12:09:18 json_config -- json_config/common.sh@43 -- # break 00:05:12.644 12:09:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.644 12:09:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.644 SPDK target shutdown done 00:05:12.644 12:09:18 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:12.644 INFO: relaunching applications... 
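The shutdown just traced is json_config/common.sh's bounded SIGINT loop. The same logic in isolation, condensed from the trace (the pid is the one from this run; any target pid works, and the stderr redirect is an added nicety):

app_pid=422950
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
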
00:05:12.644 12:09:18 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.644 12:09:18 json_config -- json_config/common.sh@9 -- # local app=target 00:05:12.644 12:09:18 json_config -- json_config/common.sh@10 -- # shift 00:05:12.644 12:09:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.644 12:09:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.644 12:09:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.644 12:09:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.644 12:09:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.644 12:09:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=423838 00:05:12.644 12:09:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.644 Waiting for target to run... 00:05:12.644 12:09:18 json_config -- json_config/common.sh@25 -- # waitforlisten 423838 /var/tmp/spdk_tgt.sock 00:05:12.644 12:09:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@830 -- # '[' -z 423838 ']' 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:12.644 12:09:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.644 [2024-06-10 12:09:18.124040] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:12.644 [2024-06-10 12:09:18.124105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423838 ] 00:05:12.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.905 [2024-06-10 12:09:18.402477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.905 [2024-06-10 12:09:18.452729] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.476 [2024-06-10 12:09:18.954504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.476 [2024-06-10 12:09:18.986996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.476 12:09:19 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:13.476 12:09:19 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:13.476 12:09:19 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.476 00:05:13.476 12:09:19 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:13.476 12:09:19 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.476 INFO: Checking if target configuration is the same... 
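The equality check that follows works by normalizing both JSON documents into a canonical order and diffing them. The same comparison by hand (a sketch: the /tmp file names are placeholders where the real test uses mktemp, and config_filter.py is assumed to read stdin, as the trace suggests):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
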
00:05:13.476 12:09:19 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.476 12:09:19 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:13.476 12:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.476 + '[' 2 -ne 2 ']' 00:05:13.476 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.476 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.476 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.476 +++ basename /dev/fd/62 00:05:13.476 ++ mktemp /tmp/62.XXX 00:05:13.476 + tmp_file_1=/tmp/62.UAh 00:05:13.476 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.476 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.476 + tmp_file_2=/tmp/spdk_tgt_config.json.M7c 00:05:13.476 + ret=0 00:05:13.476 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.737 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.999 + diff -u /tmp/62.UAh /tmp/spdk_tgt_config.json.M7c 00:05:13.999 + echo 'INFO: JSON config files are the same' 00:05:13.999 INFO: JSON config files are the same 00:05:13.999 + rm /tmp/62.UAh /tmp/spdk_tgt_config.json.M7c 00:05:13.999 + exit 0 00:05:13.999 12:09:19 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:13.999 12:09:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:13.999 INFO: changing configuration and checking if this can be detected... 00:05:13.999 12:09:19 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.999 12:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.999 12:09:19 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:13.999 12:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.999 12:09:19 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.999 + '[' 2 -ne 2 ']' 00:05:13.999 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.999 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:13.999 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.999 +++ basename /dev/fd/62 00:05:13.999 ++ mktemp /tmp/62.XXX 00:05:13.999 + tmp_file_1=/tmp/62.UZo 00:05:13.999 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.999 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.999 + tmp_file_2=/tmp/spdk_tgt_config.json.nt2 00:05:13.999 + ret=0 00:05:13.999 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.260 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.521 + diff -u /tmp/62.UZo /tmp/spdk_tgt_config.json.nt2 00:05:14.521 + ret=1 00:05:14.521 + echo '=== Start of file: /tmp/62.UZo ===' 00:05:14.521 + cat /tmp/62.UZo 00:05:14.521 + echo '=== End of file: /tmp/62.UZo ===' 00:05:14.521 + echo '' 00:05:14.521 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nt2 ===' 00:05:14.521 + cat /tmp/spdk_tgt_config.json.nt2 00:05:14.521 + echo '=== End of file: /tmp/spdk_tgt_config.json.nt2 ===' 00:05:14.521 + echo '' 00:05:14.521 + rm /tmp/62.UZo /tmp/spdk_tgt_config.json.nt2 00:05:14.521 + exit 1 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:14.521 INFO: configuration change detected. 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 423838 ]] 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.521 12:09:19 json_config -- json_config/json_config.sh@323 -- # killprocess 423838 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@949 -- # '[' -z 423838 ']' 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@953 -- # kill -0 423838 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@954 -- # uname 00:05:14.521 12:09:19 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:14.521 12:09:19 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 423838 00:05:14.521 12:09:20 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:14.521 12:09:20 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:14.521 12:09:20 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 423838' 00:05:14.521 killing process with pid 423838 00:05:14.521 12:09:20 json_config -- common/autotest_common.sh@968 -- # kill 423838 00:05:14.521 12:09:20 json_config -- common/autotest_common.sh@973 -- # wait 423838 00:05:14.782 12:09:20 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.782 12:09:20 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:14.782 12:09:20 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:14.782 12:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.782 12:09:20 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:14.782 12:09:20 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:14.782 INFO: Success 00:05:14.782 00:05:14.782 real 0m6.904s 00:05:14.782 user 0m8.327s 00:05:14.782 sys 0m1.702s 00:05:14.782 12:09:20 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.782 12:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.782 ************************************ 00:05:14.782 END TEST json_config 00:05:14.782 ************************************ 00:05:14.782 12:09:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.782 12:09:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.782 12:09:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.782 12:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:15.043 ************************************ 00:05:15.043 START TEST json_config_extra_key 00:05:15.043 ************************************ 00:05:15.043 12:09:20 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.043 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.043 12:09:20 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.043 12:09:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.043 12:09:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.043 12:09:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.043 12:09:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.043 12:09:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.043 12:09:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.043 12:09:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.043 12:09:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.043 12:09:20 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.043 12:09:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:15.044 12:09:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:15.044 12:09:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.044 INFO: launching applications... 00:05:15.044 12:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=424594 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.044 Waiting for target to run... 
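The launch traced below differs from the earlier --wait-for-rpc runs in one way: the whole configuration is applied at boot from a JSON file. In isolation (flags copied from the trace; the backgrounding and pid capture are illustrative):

BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
"$BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &
app_pid=$!
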
00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 424594 /var/tmp/spdk_tgt.sock 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 424594 ']' 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.044 12:09:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:15.044 12:09:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.044 [2024-06-10 12:09:20.579087] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:15.044 [2024-06-10 12:09:20.579158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424594 ] 00:05:15.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.305 [2024-06-10 12:09:20.886794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.567 [2024-06-10 12:09:20.942567] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.827 12:09:21 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:15.827 12:09:21 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.827 00:05:15.827 12:09:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.827 INFO: shutting down applications... 
00:05:15.827 12:09:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 424594 ]] 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 424594 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 424594 00:05:15.827 12:09:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 424594 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.398 12:09:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.398 SPDK target shutdown done 00:05:16.398 12:09:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.398 Success 00:05:16.398 00:05:16.398 real 0m1.431s 00:05:16.398 user 0m1.045s 00:05:16.398 sys 0m0.397s 00:05:16.398 12:09:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.398 12:09:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.398 ************************************ 00:05:16.398 END TEST json_config_extra_key 00:05:16.398 ************************************ 00:05:16.398 12:09:21 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.398 12:09:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:16.398 12:09:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:16.398 12:09:21 -- common/autotest_common.sh@10 -- # set +x 00:05:16.398 ************************************ 00:05:16.398 START TEST alias_rpc 00:05:16.398 ************************************ 00:05:16.398 12:09:21 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.662 * Looking for test storage... 
00:05:16.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:16.662 12:09:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.662 12:09:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=424963 00:05:16.662 12:09:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 424963 00:05:16.662 12:09:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 424963 ']' 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:16.662 12:09:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.662 [2024-06-10 12:09:22.093299] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:16.662 [2024-06-10 12:09:22.093362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424963 ] 00:05:16.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.662 [2024-06-10 12:09:22.167720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.662 [2024-06-10 12:09:22.241470] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.604 12:09:22 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:17.604 12:09:22 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:17.604 12:09:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:17.604 12:09:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 424963 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 424963 ']' 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 424963 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 424963 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 424963' 00:05:17.604 killing process with pid 424963 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@968 -- # kill 424963 00:05:17.604 12:09:23 alias_rpc -- common/autotest_common.sh@973 -- # wait 424963 00:05:17.865 00:05:17.865 real 0m1.382s 00:05:17.865 user 0m1.512s 00:05:17.865 sys 0m0.382s 00:05:17.865 12:09:23 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.865 12:09:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 ************************************ 
00:05:17.865 END TEST alias_rpc 00:05:17.865 ************************************ 00:05:17.865 12:09:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:17.865 12:09:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.865 12:09:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.865 12:09:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.865 12:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 ************************************ 00:05:17.865 START TEST spdkcli_tcp 00:05:17.865 ************************************ 00:05:17.865 12:09:23 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.125 * Looking for test storage... 00:05:18.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=425239 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 425239 00:05:18.125 12:09:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 425239 ']' 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:18.125 12:09:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.125 [2024-06-10 12:09:23.553566] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
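The spdkcli_tcp suite that begins here checks that the target's JSON-RPC server is reachable over TCP, not only over its UNIX-domain socket. Stripped of the workspace paths, the transport path under test reduces to a socat bridge plus a retried RPC call, a minimal sketch of the invocations that appear verbatim in the trace below:

  # bridge TCP port 9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # list the registered methods through the bridge; retry up to 100 times
  # with a 2-second timeout, since the bridge may not be listening yet
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods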
00:05:18.125 [2024-06-10 12:09:23.553635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425239 ] 00:05:18.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.125 [2024-06-10 12:09:23.627594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.125 [2024-06-10 12:09:23.703173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.125 [2024-06-10 12:09:23.703175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.727 12:09:24 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.727 12:09:24 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:18.727 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=425375 00:05:18.727 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.727 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.990 [ 00:05:18.990 "bdev_malloc_delete", 00:05:18.990 "bdev_malloc_create", 00:05:18.990 "bdev_null_resize", 00:05:18.990 "bdev_null_delete", 00:05:18.990 "bdev_null_create", 00:05:18.990 "bdev_nvme_cuse_unregister", 00:05:18.990 "bdev_nvme_cuse_register", 00:05:18.990 "bdev_opal_new_user", 00:05:18.990 "bdev_opal_set_lock_state", 00:05:18.990 "bdev_opal_delete", 00:05:18.990 "bdev_opal_get_info", 00:05:18.990 "bdev_opal_create", 00:05:18.990 "bdev_nvme_opal_revert", 00:05:18.990 "bdev_nvme_opal_init", 00:05:18.990 "bdev_nvme_send_cmd", 00:05:18.990 "bdev_nvme_get_path_iostat", 00:05:18.990 "bdev_nvme_get_mdns_discovery_info", 00:05:18.990 "bdev_nvme_stop_mdns_discovery", 00:05:18.990 "bdev_nvme_start_mdns_discovery", 00:05:18.990 "bdev_nvme_set_multipath_policy", 00:05:18.990 "bdev_nvme_set_preferred_path", 00:05:18.990 "bdev_nvme_get_io_paths", 00:05:18.990 "bdev_nvme_remove_error_injection", 00:05:18.990 "bdev_nvme_add_error_injection", 00:05:18.990 "bdev_nvme_get_discovery_info", 00:05:18.990 "bdev_nvme_stop_discovery", 00:05:18.990 "bdev_nvme_start_discovery", 00:05:18.990 "bdev_nvme_get_controller_health_info", 00:05:18.990 "bdev_nvme_disable_controller", 00:05:18.990 "bdev_nvme_enable_controller", 00:05:18.990 "bdev_nvme_reset_controller", 00:05:18.990 "bdev_nvme_get_transport_statistics", 00:05:18.990 "bdev_nvme_apply_firmware", 00:05:18.990 "bdev_nvme_detach_controller", 00:05:18.990 "bdev_nvme_get_controllers", 00:05:18.990 "bdev_nvme_attach_controller", 00:05:18.990 "bdev_nvme_set_hotplug", 00:05:18.990 "bdev_nvme_set_options", 00:05:18.990 "bdev_passthru_delete", 00:05:18.990 "bdev_passthru_create", 00:05:18.990 "bdev_lvol_set_parent_bdev", 00:05:18.990 "bdev_lvol_set_parent", 00:05:18.990 "bdev_lvol_check_shallow_copy", 00:05:18.990 "bdev_lvol_start_shallow_copy", 00:05:18.990 "bdev_lvol_grow_lvstore", 00:05:18.990 "bdev_lvol_get_lvols", 00:05:18.990 "bdev_lvol_get_lvstores", 00:05:18.990 "bdev_lvol_delete", 00:05:18.990 "bdev_lvol_set_read_only", 00:05:18.990 "bdev_lvol_resize", 00:05:18.990 "bdev_lvol_decouple_parent", 00:05:18.990 "bdev_lvol_inflate", 00:05:18.990 "bdev_lvol_rename", 00:05:18.990 "bdev_lvol_clone_bdev", 00:05:18.990 "bdev_lvol_clone", 00:05:18.990 "bdev_lvol_snapshot", 00:05:18.990 "bdev_lvol_create", 00:05:18.990 "bdev_lvol_delete_lvstore", 00:05:18.990 "bdev_lvol_rename_lvstore", 
00:05:18.990 "bdev_lvol_create_lvstore", 00:05:18.990 "bdev_raid_set_options", 00:05:18.990 "bdev_raid_remove_base_bdev", 00:05:18.990 "bdev_raid_add_base_bdev", 00:05:18.990 "bdev_raid_delete", 00:05:18.990 "bdev_raid_create", 00:05:18.990 "bdev_raid_get_bdevs", 00:05:18.990 "bdev_error_inject_error", 00:05:18.990 "bdev_error_delete", 00:05:18.990 "bdev_error_create", 00:05:18.990 "bdev_split_delete", 00:05:18.990 "bdev_split_create", 00:05:18.990 "bdev_delay_delete", 00:05:18.990 "bdev_delay_create", 00:05:18.990 "bdev_delay_update_latency", 00:05:18.990 "bdev_zone_block_delete", 00:05:18.990 "bdev_zone_block_create", 00:05:18.990 "blobfs_create", 00:05:18.990 "blobfs_detect", 00:05:18.990 "blobfs_set_cache_size", 00:05:18.990 "bdev_aio_delete", 00:05:18.990 "bdev_aio_rescan", 00:05:18.990 "bdev_aio_create", 00:05:18.990 "bdev_ftl_set_property", 00:05:18.990 "bdev_ftl_get_properties", 00:05:18.990 "bdev_ftl_get_stats", 00:05:18.990 "bdev_ftl_unmap", 00:05:18.990 "bdev_ftl_unload", 00:05:18.990 "bdev_ftl_delete", 00:05:18.990 "bdev_ftl_load", 00:05:18.990 "bdev_ftl_create", 00:05:18.990 "bdev_virtio_attach_controller", 00:05:18.990 "bdev_virtio_scsi_get_devices", 00:05:18.990 "bdev_virtio_detach_controller", 00:05:18.990 "bdev_virtio_blk_set_hotplug", 00:05:18.990 "bdev_iscsi_delete", 00:05:18.990 "bdev_iscsi_create", 00:05:18.990 "bdev_iscsi_set_options", 00:05:18.990 "accel_error_inject_error", 00:05:18.990 "ioat_scan_accel_module", 00:05:18.990 "dsa_scan_accel_module", 00:05:18.990 "iaa_scan_accel_module", 00:05:18.990 "vfu_virtio_create_scsi_endpoint", 00:05:18.990 "vfu_virtio_scsi_remove_target", 00:05:18.990 "vfu_virtio_scsi_add_target", 00:05:18.990 "vfu_virtio_create_blk_endpoint", 00:05:18.990 "vfu_virtio_delete_endpoint", 00:05:18.990 "keyring_file_remove_key", 00:05:18.990 "keyring_file_add_key", 00:05:18.990 "keyring_linux_set_options", 00:05:18.990 "iscsi_get_histogram", 00:05:18.990 "iscsi_enable_histogram", 00:05:18.990 "iscsi_set_options", 00:05:18.990 "iscsi_get_auth_groups", 00:05:18.990 "iscsi_auth_group_remove_secret", 00:05:18.990 "iscsi_auth_group_add_secret", 00:05:18.990 "iscsi_delete_auth_group", 00:05:18.990 "iscsi_create_auth_group", 00:05:18.990 "iscsi_set_discovery_auth", 00:05:18.990 "iscsi_get_options", 00:05:18.990 "iscsi_target_node_request_logout", 00:05:18.990 "iscsi_target_node_set_redirect", 00:05:18.990 "iscsi_target_node_set_auth", 00:05:18.990 "iscsi_target_node_add_lun", 00:05:18.990 "iscsi_get_stats", 00:05:18.990 "iscsi_get_connections", 00:05:18.990 "iscsi_portal_group_set_auth", 00:05:18.990 "iscsi_start_portal_group", 00:05:18.990 "iscsi_delete_portal_group", 00:05:18.990 "iscsi_create_portal_group", 00:05:18.990 "iscsi_get_portal_groups", 00:05:18.990 "iscsi_delete_target_node", 00:05:18.990 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.990 "iscsi_target_node_add_pg_ig_maps", 00:05:18.990 "iscsi_create_target_node", 00:05:18.990 "iscsi_get_target_nodes", 00:05:18.990 "iscsi_delete_initiator_group", 00:05:18.990 "iscsi_initiator_group_remove_initiators", 00:05:18.990 "iscsi_initiator_group_add_initiators", 00:05:18.990 "iscsi_create_initiator_group", 00:05:18.990 "iscsi_get_initiator_groups", 00:05:18.990 "nvmf_set_crdt", 00:05:18.990 "nvmf_set_config", 00:05:18.990 "nvmf_set_max_subsystems", 00:05:18.990 "nvmf_stop_mdns_prr", 00:05:18.990 "nvmf_publish_mdns_prr", 00:05:18.990 "nvmf_subsystem_get_listeners", 00:05:18.990 "nvmf_subsystem_get_qpairs", 00:05:18.990 "nvmf_subsystem_get_controllers", 00:05:18.990 "nvmf_get_stats", 00:05:18.990 
"nvmf_get_transports", 00:05:18.990 "nvmf_create_transport", 00:05:18.990 "nvmf_get_targets", 00:05:18.990 "nvmf_delete_target", 00:05:18.990 "nvmf_create_target", 00:05:18.990 "nvmf_subsystem_allow_any_host", 00:05:18.990 "nvmf_subsystem_remove_host", 00:05:18.990 "nvmf_subsystem_add_host", 00:05:18.990 "nvmf_ns_remove_host", 00:05:18.990 "nvmf_ns_add_host", 00:05:18.990 "nvmf_subsystem_remove_ns", 00:05:18.990 "nvmf_subsystem_add_ns", 00:05:18.990 "nvmf_subsystem_listener_set_ana_state", 00:05:18.990 "nvmf_discovery_get_referrals", 00:05:18.990 "nvmf_discovery_remove_referral", 00:05:18.990 "nvmf_discovery_add_referral", 00:05:18.990 "nvmf_subsystem_remove_listener", 00:05:18.990 "nvmf_subsystem_add_listener", 00:05:18.990 "nvmf_delete_subsystem", 00:05:18.990 "nvmf_create_subsystem", 00:05:18.990 "nvmf_get_subsystems", 00:05:18.990 "env_dpdk_get_mem_stats", 00:05:18.990 "nbd_get_disks", 00:05:18.990 "nbd_stop_disk", 00:05:18.990 "nbd_start_disk", 00:05:18.990 "ublk_recover_disk", 00:05:18.990 "ublk_get_disks", 00:05:18.990 "ublk_stop_disk", 00:05:18.990 "ublk_start_disk", 00:05:18.990 "ublk_destroy_target", 00:05:18.990 "ublk_create_target", 00:05:18.990 "virtio_blk_create_transport", 00:05:18.990 "virtio_blk_get_transports", 00:05:18.990 "vhost_controller_set_coalescing", 00:05:18.990 "vhost_get_controllers", 00:05:18.990 "vhost_delete_controller", 00:05:18.990 "vhost_create_blk_controller", 00:05:18.990 "vhost_scsi_controller_remove_target", 00:05:18.990 "vhost_scsi_controller_add_target", 00:05:18.990 "vhost_start_scsi_controller", 00:05:18.990 "vhost_create_scsi_controller", 00:05:18.990 "thread_set_cpumask", 00:05:18.990 "framework_get_scheduler", 00:05:18.990 "framework_set_scheduler", 00:05:18.990 "framework_get_reactors", 00:05:18.990 "thread_get_io_channels", 00:05:18.990 "thread_get_pollers", 00:05:18.990 "thread_get_stats", 00:05:18.990 "framework_monitor_context_switch", 00:05:18.990 "spdk_kill_instance", 00:05:18.990 "log_enable_timestamps", 00:05:18.990 "log_get_flags", 00:05:18.990 "log_clear_flag", 00:05:18.990 "log_set_flag", 00:05:18.990 "log_get_level", 00:05:18.990 "log_set_level", 00:05:18.990 "log_get_print_level", 00:05:18.990 "log_set_print_level", 00:05:18.990 "framework_enable_cpumask_locks", 00:05:18.990 "framework_disable_cpumask_locks", 00:05:18.991 "framework_wait_init", 00:05:18.991 "framework_start_init", 00:05:18.991 "scsi_get_devices", 00:05:18.991 "bdev_get_histogram", 00:05:18.991 "bdev_enable_histogram", 00:05:18.991 "bdev_set_qos_limit", 00:05:18.991 "bdev_set_qd_sampling_period", 00:05:18.991 "bdev_get_bdevs", 00:05:18.991 "bdev_reset_iostat", 00:05:18.991 "bdev_get_iostat", 00:05:18.991 "bdev_examine", 00:05:18.991 "bdev_wait_for_examine", 00:05:18.991 "bdev_set_options", 00:05:18.991 "notify_get_notifications", 00:05:18.991 "notify_get_types", 00:05:18.991 "accel_get_stats", 00:05:18.991 "accel_set_options", 00:05:18.991 "accel_set_driver", 00:05:18.991 "accel_crypto_key_destroy", 00:05:18.991 "accel_crypto_keys_get", 00:05:18.991 "accel_crypto_key_create", 00:05:18.991 "accel_assign_opc", 00:05:18.991 "accel_get_module_info", 00:05:18.991 "accel_get_opc_assignments", 00:05:18.991 "vmd_rescan", 00:05:18.991 "vmd_remove_device", 00:05:18.991 "vmd_enable", 00:05:18.991 "sock_get_default_impl", 00:05:18.991 "sock_set_default_impl", 00:05:18.991 "sock_impl_set_options", 00:05:18.991 "sock_impl_get_options", 00:05:18.991 "iobuf_get_stats", 00:05:18.991 "iobuf_set_options", 00:05:18.991 "keyring_get_keys", 00:05:18.991 "framework_get_pci_devices", 
00:05:18.991 "framework_get_config", 00:05:18.991 "framework_get_subsystems", 00:05:18.991 "vfu_tgt_set_base_path", 00:05:18.991 "trace_get_info", 00:05:18.991 "trace_get_tpoint_group_mask", 00:05:18.991 "trace_disable_tpoint_group", 00:05:18.991 "trace_enable_tpoint_group", 00:05:18.991 "trace_clear_tpoint_mask", 00:05:18.991 "trace_set_tpoint_mask", 00:05:18.991 "spdk_get_version", 00:05:18.991 "rpc_get_methods" 00:05:18.991 ] 00:05:18.991 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.991 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.991 12:09:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 425239 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 425239 ']' 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 425239 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 425239 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 425239' 00:05:18.991 killing process with pid 425239 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 425239 00:05:18.991 12:09:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 425239 00:05:19.250 00:05:19.250 real 0m1.405s 00:05:19.250 user 0m2.572s 00:05:19.250 sys 0m0.417s 00:05:19.250 12:09:24 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.250 12:09:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.250 ************************************ 00:05:19.250 END TEST spdkcli_tcp 00:05:19.250 ************************************ 00:05:19.250 12:09:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.250 12:09:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.250 12:09:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.250 12:09:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.510 ************************************ 00:05:19.510 START TEST dpdk_mem_utility 00:05:19.510 ************************************ 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.510 * Looking for test storage... 
00:05:19.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.510 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.510 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=425543 00:05:19.510 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 425543 00:05:19.510 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 425543 ']' 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:19.510 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.510 [2024-06-10 12:09:25.020514] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:19.510 [2024-06-10 12:09:25.020585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425543 ] 00:05:19.510 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.510 [2024-06-10 12:09:25.091696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.769 [2024-06-10 12:09:25.167802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.339 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:20.339 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:20.339 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.339 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.339 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:20.339 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.339 { 00:05:20.339 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.339 } 00:05:20.339 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:20.339 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.339 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:20.339 1 heaps totaling size 814.000000 MiB 00:05:20.339 size: 814.000000 MiB heap id: 0 00:05:20.339 end heaps---------- 00:05:20.339 8 mempools totaling size 598.116089 MiB 00:05:20.339 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.339 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.339 size: 84.521057 MiB name: bdev_io_425543 00:05:20.339 size: 51.011292 MiB name: evtpool_425543 00:05:20.339 size: 50.003479 MiB name: 
msgpool_425543 00:05:20.339 size: 21.763794 MiB name: PDU_Pool 00:05:20.339 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.339 size: 0.026123 MiB name: Session_Pool 00:05:20.339 end mempools------- 00:05:20.339 6 memzones totaling size 4.142822 MiB 00:05:20.339 size: 1.000366 MiB name: RG_ring_0_425543 00:05:20.339 size: 1.000366 MiB name: RG_ring_1_425543 00:05:20.339 size: 1.000366 MiB name: RG_ring_4_425543 00:05:20.340 size: 1.000366 MiB name: RG_ring_5_425543 00:05:20.340 size: 0.125366 MiB name: RG_ring_2_425543 00:05:20.340 size: 0.015991 MiB name: RG_ring_3_425543 00:05:20.340 end memzones------- 00:05:20.340 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.340 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:20.340 list of free elements. size: 12.519348 MiB 00:05:20.340 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:20.340 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:20.340 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:20.340 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:20.340 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:20.340 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:20.340 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:20.340 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:20.340 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:20.340 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:20.340 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:20.340 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:20.340 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:20.340 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:20.340 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:20.340 list of standard malloc elements. 
size: 199.218079 MiB 00:05:20.340 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:20.340 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:20.340 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.340 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:20.340 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.340 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.340 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:20.340 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.340 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:20.340 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:20.340 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:20.340 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:20.340 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:20.340 list of memzone associated elements. 
size: 602.262573 MiB 00:05:20.340 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:20.340 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.340 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:20.340 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.340 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:20.340 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_425543_0 00:05:20.340 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:20.340 associated memzone info: size: 48.002930 MiB name: MP_evtpool_425543_0 00:05:20.340 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:20.340 associated memzone info: size: 48.002930 MiB name: MP_msgpool_425543_0 00:05:20.340 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:20.340 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.340 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:20.340 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.340 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:20.340 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_425543 00:05:20.340 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:20.340 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_425543 00:05:20.340 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.340 associated memzone info: size: 1.007996 MiB name: MP_evtpool_425543 00:05:20.340 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:20.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.340 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:20.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.340 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:20.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.340 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:20.340 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.340 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:20.340 associated memzone info: size: 1.000366 MiB name: RG_ring_0_425543 00:05:20.340 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:20.340 associated memzone info: size: 1.000366 MiB name: RG_ring_1_425543 00:05:20.340 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:20.340 associated memzone info: size: 1.000366 MiB name: RG_ring_4_425543 00:05:20.340 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:20.340 associated memzone info: size: 1.000366 MiB name: RG_ring_5_425543 00:05:20.340 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:20.340 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_425543 00:05:20.340 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:20.340 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.340 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:20.340 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.340 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:20.340 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.340 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:20.340 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_425543 00:05:20.340 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:20.340 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.340 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:20.340 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.340 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:20.340 associated memzone info: size: 0.015991 MiB name: RG_ring_3_425543 00:05:20.340 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:20.340 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.340 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:20.340 associated memzone info: size: 0.000183 MiB name: MP_msgpool_425543 00:05:20.340 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:20.340 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_425543 00:05:20.340 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:20.340 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.340 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.340 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 425543 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 425543 ']' 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 425543 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 425543 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 425543' 00:05:20.340 killing process with pid 425543 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 425543 00:05:20.340 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 425543 00:05:20.601 00:05:20.601 real 0m1.269s 00:05:20.601 user 0m1.318s 00:05:20.601 sys 0m0.378s 00:05:20.601 12:09:26 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.601 12:09:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.601 ************************************ 00:05:20.601 END TEST dpdk_mem_utility 00:05:20.601 ************************************ 00:05:20.601 12:09:26 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.601 12:09:26 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.601 12:09:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.601 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:20.601 ************************************ 00:05:20.601 START TEST event 00:05:20.601 ************************************ 00:05:20.601 12:09:26 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.861 * Looking for test storage... 
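The dpdk_mem_utility run above produced its dump in two steps: the env_dpdk_get_mem_stats RPC made the target write its allocation state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then rendered that file, first as the heap/mempool/memzone summary and then, with -m 0, as the element-level listing for heap id 0. Condensed from the calls traced above (the test issues the RPC via its rpc_cmd helper; scripts/rpc.py is the standalone equivalent, paths shortened):

  scripts/rpc.py env_dpdk_get_mem_stats   # target dumps to /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # heap, mempool and memzone summary
  scripts/dpdk_mem_info.py -m 0           # per-element view of heap id 0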
00:05:20.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.861 12:09:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.861 12:09:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.861 12:09:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.861 12:09:26 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:20.861 12:09:26 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.861 12:09:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.861 ************************************ 00:05:20.861 START TEST event_perf 00:05:20.861 ************************************ 00:05:20.861 12:09:26 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.861 Running I/O for 1 seconds...[2024-06-10 12:09:26.357244] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:20.861 [2024-06-10 12:09:26.357341] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425844 ] 00:05:20.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.861 [2024-06-10 12:09:26.427756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.121 [2024-06-10 12:09:26.496945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.121 [2024-06-10 12:09:26.497057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.121 [2024-06-10 12:09:26.497205] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.121 Running I/O for 1 seconds...[2024-06-10 12:09:26.497210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.060 00:05:22.060 lcore 0: 181155 00:05:22.060 lcore 1: 181153 00:05:22.060 lcore 2: 181151 00:05:22.060 lcore 3: 181154 00:05:22.060 done. 00:05:22.060 00:05:22.060 real 0m1.214s 00:05:22.060 user 0m4.135s 00:05:22.060 sys 0m0.075s 00:05:22.060 12:09:27 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.060 12:09:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.060 ************************************ 00:05:22.060 END TEST event_perf 00:05:22.060 ************************************ 00:05:22.060 12:09:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.060 12:09:27 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:22.060 12:09:27 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.060 12:09:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.060 ************************************ 00:05:22.060 START TEST event_reactor 00:05:22.060 ************************************ 00:05:22.060 12:09:27 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.060 [2024-06-10 12:09:27.642804] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
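For scale: the event_perf run above used all four reactors (mask 0xF) for the 1-second window requested with -t 1, and each lcore counter landed near 181,150 events, roughly 725 k events processed in total. The reactor test starting here is single-core (-c 0x1 in its EAL arguments) and instead exercises scripted timer events, one oneshot plus periodic ticks, which show up as the tick 100/250/500 lines below. The two binaries as invoked in this trace (paths shortened):

  test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, per-lcore event counts
  test/event/reactor/reactor -t 1                # oneshot + periodic tick trace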
00:05:22.060 [2024-06-10 12:09:27.642908] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426202 ] 00:05:22.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.320 [2024-06-10 12:09:27.711399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.320 [2024-06-10 12:09:27.774699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.261 test_start 00:05:23.261 oneshot 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 500 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 test_end 00:05:23.261 00:05:23.261 real 0m1.206s 00:05:23.261 user 0m1.126s 00:05:23.261 sys 0m0.075s 00:05:23.261 12:09:28 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:23.261 12:09:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.261 ************************************ 00:05:23.261 END TEST event_reactor 00:05:23.261 ************************************ 00:05:23.261 12:09:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.261 12:09:28 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:23.261 12:09:28 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:23.261 12:09:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.521 ************************************ 00:05:23.521 START TEST event_reactor_perf 00:05:23.521 ************************************ 00:05:23.521 12:09:28 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.521 [2024-06-10 12:09:28.920650] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
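reactor_perf, whose startup banner appears above, is the throughput companion to the tick trace: it drives a single reactor with a continuous stream of events for the one second requested with -t 1 and prints a single summary line, the "Performance: N events per second" result that follows shortly. As invoked in this trace (path shortened):

  test/event/reactor_perf/reactor_perf -t 1   # single-reactor event throughput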
00:05:23.521 [2024-06-10 12:09:28.920748] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426552 ] 00:05:23.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.521 [2024-06-10 12:09:28.989304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.521 [2024-06-10 12:09:29.053313] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.906 test_start 00:05:24.906 test_end 00:05:24.906 Performance: 367387 events per second 00:05:24.906 00:05:24.906 real 0m1.208s 00:05:24.906 user 0m1.130s 00:05:24.906 sys 0m0.074s 00:05:24.906 12:09:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.906 12:09:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.906 ************************************ 00:05:24.906 END TEST event_reactor_perf 00:05:24.906 ************************************ 00:05:24.906 12:09:30 event -- event/event.sh@49 -- # uname -s 00:05:24.906 12:09:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.906 12:09:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.906 12:09:30 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.906 12:09:30 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.906 12:09:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.906 ************************************ 00:05:24.906 START TEST event_scheduler 00:05:24.906 ************************************ 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.906 * Looking for test storage... 00:05:24.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.906 12:09:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.906 12:09:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=426897 00:05:24.906 12:09:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.906 12:09:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.906 12:09:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 426897 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 426897 ']' 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
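The scheduler suite launches its app with initialization deliberately paused: -m 0xF gives it four reactors, -p 0x2 sets the main lcore to core 2 (visible as --main-lcore=2 in the EAL arguments below), and --wait-for-rpc holds the framework before startup. The test then completes startup over RPC, switching to the dynamic scheduler first; the POWER: lines that follow are DPDK's power library moving the cores' cpufreq governors to 'performance' for the dynamic scheduler and restoring 'powersave' at shutdown. The rpc_cmd calls traced below are equivalent to:

  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scripts/rpc.py framework_set_scheduler dynamic   # switch schedulers before init
  scripts/rpc.py framework_start_init              # resume framework startup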
00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:24.906 12:09:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.906 [2024-06-10 12:09:30.336419] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:24.906 [2024-06-10 12:09:30.336471] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426897 ] 00:05:24.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.906 [2024-06-10 12:09:30.396310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.906 [2024-06-10 12:09:30.451212] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.906 [2024-06-10 12:09:30.451479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.906 [2024-06-10 12:09:30.451629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.906 [2024-06-10 12:09:30.451630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:25.849 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 POWER: Env isn't set yet! 00:05:25.849 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:25.849 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.849 POWER: Attempting to initialise PSTAT power management... 
00:05:25.849 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:25.849 POWER: Initialized successfully for lcore 0 power management 00:05:25.849 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:25.849 POWER: Initialized successfully for lcore 1 power management 00:05:25.849 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:25.849 POWER: Initialized successfully for lcore 2 power management 00:05:25.849 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:25.849 POWER: Initialized successfully for lcore 3 power management 00:05:25.849 [2024-06-10 12:09:31.134580] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.849 [2024-06-10 12:09:31.134592] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.849 [2024-06-10 12:09:31.134598] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 [2024-06-10 12:09:31.195329] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 ************************************ 00:05:25.849 START TEST scheduler_create_thread 00:05:25.849 ************************************ 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 2 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 3 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 4 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 5 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 6 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 7 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 8 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.849 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.423 9 00:05:26.423 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.423 12:09:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.423 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:26.423 12:09:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.369 10 00:05:27.369 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.369 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.369 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.369 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.311 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.311 12:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.311 12:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.311 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.311 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.881 12:09:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.881 12:09:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.881 12:09:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.881 12:09:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.821 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.821 12:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:29.821 12:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:29.821 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.821 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.389 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.389 00:05:30.389 real 0m4.564s 00:05:30.389 user 0m0.024s 00:05:30.389 sys 0m0.007s 00:05:30.389 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.389 12:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.389 ************************************ 00:05:30.389 END TEST scheduler_create_thread 00:05:30.389 ************************************ 00:05:30.389 12:09:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:30.389 12:09:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 426897 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 426897 ']' 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 426897 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
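The scheduler_create_thread subtest that just ended drove the target purely through scheduler-plugin RPCs: it created CPU-pinned threads at full, partial and zero activity, retuned one thread's active percentage mid-run, and deleted another before teardown. Condensed from the rpc_cmd calls traced above:

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 to 50% active
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread 12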
00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 426897 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 426897' 00:05:30.389 killing process with pid 426897 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 426897 00:05:30.389 12:09:35 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 426897 00:05:30.389 [2024-06-10 12:09:35.978043] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:30.648 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:30.648 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:30.648 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:30.648 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:30.648 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:30.648 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:30.648 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:30.648 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:30.648 00:05:30.648 real 0m5.986s 00:05:30.648 user 0m14.977s 00:05:30.648 sys 0m0.341s 00:05:30.648 12:09:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.648 12:09:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.648 ************************************ 00:05:30.648 END TEST event_scheduler 00:05:30.648 ************************************ 00:05:30.648 12:09:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:30.648 12:09:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:30.648 12:09:36 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.648 12:09:36 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.648 12:09:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.648 ************************************ 00:05:30.648 START TEST app_repeat 00:05:30.648 ************************************ 00:05:30.648 12:09:36 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:30.648 12:09:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=427997 00:05:30.908 12:09:36 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 427997' 00:05:30.908 Process app_repeat pid: 427997 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:30.908 spdk_app_start Round 0 00:05:30.908 12:09:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 427997 /var/tmp/spdk-nbd.sock 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 427997 ']' 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:30.908 12:09:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.908 [2024-06-10 12:09:36.289989] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:30.908 [2024-06-10 12:09:36.290052] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427997 ] 00:05:30.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.908 [2024-06-10 12:09:36.358399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.908 [2024-06-10 12:09:36.423619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.908 [2024-06-10 12:09:36.423620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.479 12:09:37 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:31.479 12:09:37 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:31.479 12:09:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.739 Malloc0 00:05:31.739 12:09:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.000 Malloc1 00:05:32.000 12:09:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.000 12:09:37 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.000 /dev/nbd0 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.000 1+0 records in 00:05:32.000 1+0 records out 00:05:32.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000126254 s, 32.4 MB/s 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:32.000 12:09:37 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.000 12:09:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.260 /dev/nbd1 00:05:32.260 12:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.260 12:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:32.260 12:09:37 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:32.260 12:09:37 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.260 1+0 records in 00:05:32.260 1+0 records out 00:05:32.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281528 s, 14.5 MB/s 00:05:32.261 12:09:37 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.261 12:09:37 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:32.261 12:09:37 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.261 12:09:37 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:32.261 12:09:37 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:32.261 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.261 12:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.261 12:09:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.261 12:09:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.261 12:09:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.521 { 00:05:32.521 "nbd_device": "/dev/nbd0", 00:05:32.521 "bdev_name": "Malloc0" 00:05:32.521 }, 00:05:32.521 { 00:05:32.521 "nbd_device": "/dev/nbd1", 00:05:32.521 "bdev_name": "Malloc1" 00:05:32.521 } 00:05:32.521 ]' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.521 { 00:05:32.521 "nbd_device": "/dev/nbd0", 00:05:32.521 "bdev_name": "Malloc0" 00:05:32.521 }, 00:05:32.521 { 00:05:32.521 "nbd_device": "/dev/nbd1", 00:05:32.521 "bdev_name": "Malloc1" 00:05:32.521 } 00:05:32.521 ]' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.521 /dev/nbd1' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.521 /dev/nbd1' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.521 12:09:37 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.521 256+0 records in 00:05:32.521 256+0 records out 00:05:32.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116762 s, 89.8 MB/s 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.521 256+0 records in 00:05:32.521 256+0 records out 00:05:32.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158128 s, 66.3 MB/s 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.521 12:09:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.521 256+0 records in 00:05:32.521 256+0 records out 00:05:32.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171324 s, 61.2 MB/s 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.521 12:09:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.781 12:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.041 12:09:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.041 12:09:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.301 12:09:38 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:33.301 [2024-06-10 12:09:38.853006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.611 [2024-06-10 12:09:38.917262] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.611 [2024-06-10 12:09:38.917263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.611 [2024-06-10 12:09:38.948527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.611 [2024-06-10 12:09:38.948563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.151 12:09:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.151 12:09:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:36.151 spdk_app_start Round 1 00:05:36.151 12:09:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 427997 /var/tmp/spdk-nbd.sock 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 427997 ']' 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.151 12:09:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.412 12:09:41 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:36.412 12:09:41 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:36.412 12:09:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.673 Malloc0 00:05:36.673 12:09:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.673 Malloc1 00:05:36.673 12:09:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
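Round 1 now rebuilds the same two Malloc bdevs and, in the trace below, attaches them to kernel nbd nodes through the dedicated RPC socket. A minimal sketch of that attach loop, using the rpc.py invocation and paths from this run, is:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    bdev_list=(Malloc0 Malloc1)
    nbd_list=(/dev/nbd0 /dev/nbd1)
    for ((i = 0; i < 2; i++)); do
        # expose each Malloc bdev as a block device via the nbd kernel module
        $RPC nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
    done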
00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.673 12:09:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.933 /dev/nbd0 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.933 1+0 records in 00:05:36.933 1+0 records out 00:05:36.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272861 s, 15.0 MB/s 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:36.933 12:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.933 /dev/nbd1 00:05:36.933 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
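waitfornbd, traced above for nbd0 and next for nbd1, first polls /proc/partitions until the kernel publishes the device, then proves it is readable with one direct-I/O read. A hedged reconstruction (the back-off sleep is assumed, and the harness writes its scratch file under spdk/test/event rather than /tmp):

    waitfornbd() {
        local nbd_name=$1 size i
        for ((i = 1; i <= 20; i++)); do               # bounded poll, as in the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                 # assumed back-off between polls
        done
        for ((i = 1; i <= 20; i++)); do
            # read one 4096-byte block with O_DIRECT and check it landed
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }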
00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.193 1+0 records in 00:05:37.193 1+0 records out 00:05:37.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230054 s, 17.8 MB/s 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:37.193 12:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.193 { 00:05:37.193 "nbd_device": "/dev/nbd0", 00:05:37.193 "bdev_name": "Malloc0" 00:05:37.193 }, 00:05:37.193 { 00:05:37.193 "nbd_device": "/dev/nbd1", 00:05:37.193 "bdev_name": "Malloc1" 00:05:37.193 } 00:05:37.193 ]' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.193 { 00:05:37.193 "nbd_device": "/dev/nbd0", 00:05:37.193 "bdev_name": "Malloc0" 00:05:37.193 }, 00:05:37.193 { 00:05:37.193 "nbd_device": "/dev/nbd1", 00:05:37.193 "bdev_name": "Malloc1" 00:05:37.193 } 00:05:37.193 ]' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.193 /dev/nbd1' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.193 /dev/nbd1' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.193 256+0 records in 00:05:37.193 256+0 records out 00:05:37.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118757 s, 88.3 MB/s 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.193 12:09:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.453 256+0 records in 00:05:37.453 256+0 records out 00:05:37.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156975 s, 66.8 MB/s 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.453 256+0 records in 00:05:37.453 256+0 records out 00:05:37.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182417 s, 57.5 MB/s 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.453 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.454 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.454 12:09:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.454 
12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.454 12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.454 12:09:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.454 12:09:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.454 12:09:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.454 12:09:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.454 12:09:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.713 12:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.713 12:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.713 12:09:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.714 12:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.973 12:09:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.973 12:09:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.973 12:09:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.233 [2024-06-10 12:09:43.675244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.233 [2024-06-10 12:09:43.740177] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.233 [2024-06-10 12:09:43.740179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.233 [2024-06-10 12:09:43.772514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
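The data pass each round performs, traced above for Round 1 and repeated below for Round 2, seeds a 1 MiB file from /dev/urandom (256 blocks of 4096 bytes), writes it to every nbd device with O_DIRECT, then compares each device back against the file byte for byte. In outline, with the scratch path this run uses:

    tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # seed 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # read-back comparison
    done
    rm "$tmp_file"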
00:05:38.233 [2024-06-10 12:09:43.772553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.528 12:09:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.528 12:09:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:41.528 spdk_app_start Round 2 00:05:41.528 12:09:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 427997 /var/tmp/spdk-nbd.sock 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 427997 ']' 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:41.528 12:09:46 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:41.528 12:09:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.528 Malloc0 00:05:41.528 12:09:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.528 Malloc1 00:05:41.528 12:09:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.528 12:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.789 /dev/nbd0 00:05:41.789 12:09:47 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.789 1+0 records in 00:05:41.789 1+0 records out 00:05:41.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292244 s, 14.0 MB/s 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.789 /dev/nbd1 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.789 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:41.789 12:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.790 1+0 records in 00:05:41.790 1+0 records out 00:05:41.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283804 s, 14.4 MB/s 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:41.790 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:41.790 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.790 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.790 12:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.790 12:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.790 12:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.050 { 00:05:42.050 "nbd_device": "/dev/nbd0", 00:05:42.050 "bdev_name": "Malloc0" 00:05:42.050 }, 00:05:42.050 { 00:05:42.050 "nbd_device": "/dev/nbd1", 00:05:42.050 "bdev_name": "Malloc1" 00:05:42.050 } 00:05:42.050 ]' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.050 { 00:05:42.050 "nbd_device": "/dev/nbd0", 00:05:42.050 "bdev_name": "Malloc0" 00:05:42.050 }, 00:05:42.050 { 00:05:42.050 "nbd_device": "/dev/nbd1", 00:05:42.050 "bdev_name": "Malloc1" 00:05:42.050 } 00:05:42.050 ]' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.050 /dev/nbd1' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.050 /dev/nbd1' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.050 256+0 records in 00:05:42.050 256+0 records out 00:05:42.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119799 s, 87.5 MB/s 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.050 256+0 records in 00:05:42.050 256+0 records out 00:05:42.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164864 s, 63.6 MB/s 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.050 256+0 records in 00:05:42.050 256+0 records out 00:05:42.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171208 s, 61.2 MB/s 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.050 12:09:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.051 12:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.051 12:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
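The detach side mirrors the attach: each device is stopped over RPC, and waitfornbd_exit, traced above for nbd0 and below for nbd1, polls /proc/partitions until the kernel drops the node. In outline (the back-off sleep is assumed; the trace only shows the bounded loop):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for nbd in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk "$nbd"
        name=$(basename "$nbd")
        for ((i = 1; i <= 20; i++)); do               # wait for the kernel to drop the node
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1                                 # assumed back-off between polls
        done
    done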
00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.311 12:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.571 12:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.571 12:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.571 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.571 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.831 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.831 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.831 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.831 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.831 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.832 12:09:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.832 12:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.832 12:09:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.832 12:09:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.832 12:09:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.832 12:09:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.092 [2024-06-10 12:09:48.492843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.092 [2024-06-10 12:09:48.557809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.092 [2024-06-10 12:09:48.557811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.092 [2024-06-10 12:09:48.589091] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.092 [2024-06-10 12:09:48.589128] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
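After teardown the helper asserts nothing is left attached: nbd_get_disks returns an empty JSON array, jq extracts the device names, and grep -c counts zero matches, exactly as traced above. Condensed:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($RPC nbd_get_disks)                   # '[]' once everything is detached
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)  # grep -c exits non-zero on zero matches
    [ "$count" -eq 0 ]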
00:05:46.394 12:09:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 427997 /var/tmp/spdk-nbd.sock 00:05:46.394 12:09:51 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 427997 ']' 00:05:46.394 12:09:51 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:46.395 12:09:51 event.app_repeat -- event/event.sh@39 -- # killprocess 427997 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 427997 ']' 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 427997 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 427997 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 427997' 00:05:46.395 killing process with pid 427997 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@968 -- # kill 427997 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@973 -- # wait 427997 00:05:46.395 spdk_app_start is called in Round 0. 00:05:46.395 Shutdown signal received, stop current app iteration 00:05:46.395 Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 reinitialization... 00:05:46.395 spdk_app_start is called in Round 1. 00:05:46.395 Shutdown signal received, stop current app iteration 00:05:46.395 Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 reinitialization... 00:05:46.395 spdk_app_start is called in Round 2. 00:05:46.395 Shutdown signal received, stop current app iteration 00:05:46.395 Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 reinitialization... 00:05:46.395 spdk_app_start is called in Round 3. 
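The Round 0..3 summary above, closed out by the final shutdown notice below, is the app_repeat driver at work: the app is launched once with -t 4 (four iterations), the harness loops three times, killing the instance with SIGTERM over RPC and sleeping before the next round, then waits for the last iteration and reaps the process. Reconstructed from the event.sh steps in this trace (the verify body is elided):

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $app -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # harness helper, sketched below
        # ... create Malloc bdevs, attach/verify/detach the nbd devices ...
        $RPC spdk_kill_instance SIGTERM                      # end this iteration
        sleep 3                                              # let the app reinitialize
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # Round 3 comes up one last time
    killprocess "$repeat_pid"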
00:05:46.395 Shutdown signal received, stop current app iteration 00:05:46.395 12:09:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.395 12:09:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.395 00:05:46.395 real 0m15.430s 00:05:46.395 user 0m33.360s 00:05:46.395 sys 0m2.038s 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.395 12:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.395 ************************************ 00:05:46.395 END TEST app_repeat 00:05:46.395 ************************************ 00:05:46.395 12:09:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.395 12:09:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.395 12:09:51 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.395 12:09:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.395 12:09:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.395 ************************************ 00:05:46.395 START TEST cpu_locks 00:05:46.395 ************************************ 00:05:46.395 12:09:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.395 * Looking for test storage... 00:05:46.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.395 12:09:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.395 12:09:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.395 12:09:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.395 12:09:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.395 12:09:51 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.395 12:09:51 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.395 12:09:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.395 ************************************ 00:05:46.395 START TEST default_locks 00:05:46.395 ************************************ 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=431368 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 431368 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 431368 ']' 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
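The cpu_locks suite that starts here launches a bare spdk_tgt on one core and blocks in waitforlisten until the RPC socket is up. The helper's body is not shown in this excerpt; one plausible shape of such a readiness loop (a hypothetical reconstruction, not the harness source) is:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= 100; i++)); do               # max_retries=100, per the trace
            kill -0 "$pid" 2>/dev/null || return 1     # target died while we waited
            [ -S "$rpc_addr" ] && return 0             # UNIX socket has appeared
            sleep 0.1
        done
        return 1
    }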
00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.395 12:09:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.395 [2024-06-10 12:09:51.958270] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:46.395 [2024-06-10 12:09:51.958335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431368 ] 00:05:46.395 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.657 [2024-06-10 12:09:52.030219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.657 [2024-06-10 12:09:52.105269] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.227 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.227 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:47.227 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 431368 00:05:47.227 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 431368 00:05:47.227 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.798 lslocks: write error 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 431368 ']' 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 431368' 00:05:47.798 killing process with pid 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 431368 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 431368 ']' 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (431368) - No such process 00:05:47.798 ERROR: process (pid: 431368) is no longer running 00:05:47.798 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.799 00:05:47.799 real 0m1.490s 00:05:47.799 user 0m1.552s 00:05:47.799 sys 0m0.542s 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.799 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.799 ************************************ 00:05:47.799 END TEST default_locks 00:05:47.799 ************************************ 00:05:48.060 12:09:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.060 12:09:53 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.060 12:09:53 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.060 12:09:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.060 ************************************ 00:05:48.060 START TEST default_locks_via_rpc 00:05:48.060 ************************************ 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=431673 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 431673 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.060 12:09:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 431673 ']' 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.060 12:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.060 [2024-06-10 12:09:53.520834] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:48.060 [2024-06-10 12:09:53.520882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431673 ] 00:05:48.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.060 [2024-06-10 12:09:53.586306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.060 [2024-06-10 12:09:53.650829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 431673 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 431673 00:05:49.003 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 431673 00:05:49.264 12:09:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 431673 ']' 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 431673 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 431673 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 431673' 00:05:49.264 killing process with pid 431673 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 431673 00:05:49.264 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 431673 00:05:49.528 00:05:49.528 real 0m1.457s 00:05:49.528 user 0m1.550s 00:05:49.528 sys 0m0.495s 00:05:49.528 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.528 12:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.528 ************************************ 00:05:49.528 END TEST default_locks_via_rpc 00:05:49.528 ************************************ 00:05:49.528 12:09:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.528 12:09:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.528 12:09:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.528 12:09:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.528 ************************************ 00:05:49.528 START TEST non_locking_app_on_locked_coremask 00:05:49.528 ************************************ 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=431990 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 431990 /var/tmp/spdk.sock 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 431990 ']' 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
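default_locks_via_rpc, which just passed, toggles the same per-core locks at runtime over JSON-RPC instead of at process start: framework_disable_cpumask_locks releases them (the no_locks check in the trace then finds no /var/tmp/spdk_cpu_lock_* files left), framework_enable_cpumask_locks re-claims them, and locks_exist re-verifies with lslocks. A sketch of that round trip using SPDK's scripts/rpc.py in place of the suite's rpc_cmd wrapper — tgt_pid and the relative script path are illustrative:

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo 'unexpected: lock files remain'
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks
    # Same check as locks_exist at event/cpu_locks.sh@22: lslocks lists every
    # lock the pid holds, and grep -q exits as soon as it sees the prefix
    # (which is also why lslocks logs the harmless 'write error' above).
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo 'unexpected: not re-taken'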
00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:49.528 12:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.528 [2024-06-10 12:09:55.048428] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:49.528 [2024-06-10 12:09:55.048489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431990 ] 00:05:49.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.528 [2024-06-10 12:09:55.117329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.789 [2024-06-10 12:09:55.187023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=432314 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 432314 /var/tmp/spdk2.sock 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 432314 ']' 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.362 12:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 [2024-06-10 12:09:55.862140] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:50.362 [2024-06-10 12:09:55.862191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432314 ] 00:05:50.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.362 [2024-06-10 12:09:55.961598] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
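That 'CPU core locks deactivated.' notice is the whole point of non_locking_app_on_locked_coremask: 431990 already holds the core-0 lock, yet 432314 boots cleanly on the same -m 0x1 because --disable-cpumask-locks tells it not to claim the core at all. The scenario in isolation, with the binary path as in the trace, run from the SPDK tree:

    build/bin/spdk_tgt -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000
    first=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!                                         # same core, no claim attempted
    lslocks -p "$first"  | grep -c spdk_cpu_lock      # 1: the lock belongs to the first pid
    lslocks -p "$second" | grep -c spdk_cpu_lock      # 0: the second holds nothing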
00:05:50.362 [2024-06-10 12:09:55.961626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.623 [2024-06-10 12:09:56.090708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.195 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.195 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:51.195 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 431990 00:05:51.195 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 431990 00:05:51.195 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.457 lslocks: write error 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 431990 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 431990 ']' 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 431990 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 431990 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 431990' 00:05:51.457 killing process with pid 431990 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 431990 00:05:51.457 12:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 431990 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 432314 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 432314 ']' 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 432314 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 432314 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 432314' 00:05:52.072 killing 
process with pid 432314 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 432314 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 432314 00:05:52.072 00:05:52.072 real 0m2.625s 00:05:52.072 user 0m2.872s 00:05:52.072 sys 0m0.756s 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:52.072 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.072 ************************************ 00:05:52.072 END TEST non_locking_app_on_locked_coremask 00:05:52.072 ************************************ 00:05:52.072 12:09:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.072 12:09:57 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:52.072 12:09:57 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:52.072 12:09:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.334 ************************************ 00:05:52.334 START TEST locking_app_on_unlocked_coremask 00:05:52.334 ************************************ 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=432689 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 432689 /var/tmp/spdk.sock 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 432689 ']' 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:52.334 12:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.334 [2024-06-10 12:09:57.743916] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:52.334 [2024-06-10 12:09:57.743965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432689 ] 00:05:52.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.334 [2024-06-10 12:09:57.811125] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
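Teardown between tests always runs through the killprocess helper whose xtrace keeps recurring above (for 431990 and then 432314 just now): confirm the pid is alive, read its comm name so a sudo wrapper is never signalled (here it is always reactor_0), then SIGTERM and reap. Condensed, with the caveat that wait only works because the suite started the pid itself:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                     # must still be running
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                     # reap our own child
    }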
00:05:52.334 [2024-06-10 12:09:57.811156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.334 [2024-06-10 12:09:57.880169] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=432765 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 432765 /var/tmp/spdk2.sock 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 432765 ']' 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:52.906 12:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.167 [2024-06-10 12:09:58.534833] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:53.167 [2024-06-10 12:09:58.534883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432765 ] 00:05:53.167 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.167 [2024-06-10 12:09:58.633689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.167 [2024-06-10 12:09:58.762717] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.740 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:53.740 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:53.740 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 432765 00:05:53.740 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 432765 00:05:53.740 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.313 lslocks: write error 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 432689 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 432689 ']' 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 432689 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 432689 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 432689' 00:05:54.313 killing process with pid 432689 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 432689 00:05:54.313 12:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 432689 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 432765 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 432765 ']' 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 432765 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 432765 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.886 
12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 432765' 00:05:54.886 killing process with pid 432765 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 432765 00:05:54.886 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 432765 00:05:55.146 00:05:55.146 real 0m2.868s 00:05:55.147 user 0m3.106s 00:05:55.147 sys 0m0.850s 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 ************************************ 00:05:55.147 END TEST locking_app_on_unlocked_coremask 00:05:55.147 ************************************ 00:05:55.147 12:10:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.147 12:10:00 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:55.147 12:10:00 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:55.147 12:10:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 ************************************ 00:05:55.147 START TEST locking_app_on_locked_coremask 00:05:55.147 ************************************ 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=433363 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 433363 /var/tmp/spdk.sock 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 433363 ']' 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.147 12:10:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 [2024-06-10 12:10:00.694991] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:55.147 [2024-06-10 12:10:00.695050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433363 ] 00:05:55.147 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.408 [2024-06-10 12:10:00.761521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.408 [2024-06-10 12:10:00.826404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=433412 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 433412 /var/tmp/spdk2.sock 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 433412 /var/tmp/spdk2.sock 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 433412 /var/tmp/spdk2.sock 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 433412 ']' 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.979 12:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.979 [2024-06-10 12:10:01.506891] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:55.979 [2024-06-10 12:10:01.506944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433412 ] 00:05:55.979 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.240 [2024-06-10 12:10:01.606765] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 433363 has claimed it. 00:05:56.240 [2024-06-10 12:10:01.606807] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (433412) - No such process 00:05:56.501 ERROR: process (pid: 433412) is no longer running 00:05:56.501 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.501 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:56.501 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:56.501 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:56.501 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:56.760 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:56.760 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 433363 00:05:56.760 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 433363 00:05:56.760 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.021 lslocks: write error 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 433363 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 433363 ']' 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 433363 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.021 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 433363 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 433363' 00:05:57.283 killing process with pid 433363 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 433363 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 433363 00:05:57.283 00:05:57.283 real 0m2.237s 00:05:57.283 user 0m2.481s 00:05:57.283 sys 0m0.635s 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.283 12:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.283 ************************************ 00:05:57.283 END TEST locking_app_on_locked_coremask 00:05:57.283 ************************************ 00:05:57.544 12:10:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.544 12:10:02 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.544 12:10:02 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.544 12:10:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.544 ************************************ 00:05:57.544 START TEST locking_overlapped_coremask 00:05:57.544 ************************************ 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=433773 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 433773 /var/tmp/spdk.sock 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 433773 ']' 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:57.544 12:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.544 [2024-06-10 12:10:02.995267] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:57.544 [2024-06-10 12:10:02.995321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433773 ] 00:05:57.544 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.544 [2024-06-10 12:10:03.064032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.544 [2024-06-10 12:10:03.136632] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.544 [2024-06-10 12:10:03.136753] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.544 [2024-06-10 12:10:03.136756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.487 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.487 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:58.487 12:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=433959 00:05:58.487 12:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 433959 /var/tmp/spdk2.sock 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 433959 /var/tmp/spdk2.sock 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 433959 /var/tmp/spdk2.sock 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 433959 ']' 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.488 12:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.488 [2024-06-10 12:10:03.819247] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:58.488 [2024-06-10 12:10:03.819300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433959 ] 00:05:58.488 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.488 [2024-06-10 12:10:03.899380] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 433773 has claimed it. 00:05:58.488 [2024-06-10 12:10:03.899410] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (433959) - No such process 00:05:59.058 ERROR: process (pid: 433959) is no longer running 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.058 12:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 433773 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 433773 ']' 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 433773 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 433773 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 433773' 00:05:59.059 killing process with pid 433773 00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 433773 
00:05:59.059 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 433773 00:05:59.320 00:05:59.320 real 0m1.751s 00:05:59.320 user 0m4.945s 00:05:59.320 sys 0m0.365s 00:05:59.320 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.320 12:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.320 ************************************ 00:05:59.320 END TEST locking_overlapped_coremask 00:05:59.320 ************************************ 00:05:59.320 12:10:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.320 12:10:04 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.320 12:10:04 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.320 12:10:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.320 ************************************ 00:05:59.321 START TEST locking_overlapped_coremask_via_rpc 00:05:59.321 ************************************ 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=434151 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 434151 /var/tmp/spdk.sock 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 434151 ']' 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:59.321 12:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.321 [2024-06-10 12:10:04.819583] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:59.321 [2024-06-10 12:10:04.819636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434151 ] 00:05:59.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.321 [2024-06-10 12:10:04.888998] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.321 [2024-06-10 12:10:04.889030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.581 [2024-06-10 12:10:04.965083] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.581 [2024-06-10 12:10:04.965223] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.581 [2024-06-10 12:10:04.965249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=434400 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 434400 /var/tmp/spdk2.sock 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 434400 ']' 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.152 12:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.152 [2024-06-10 12:10:05.635344] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:00.152 [2024-06-10 12:10:05.635397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434400 ] 00:06:00.152 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.152 [2024-06-10 12:10:05.717118] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
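The overlap both overlapped-coremask tests engineer is plain mask arithmetic: -m takes a hex cpumask with one bit per core, so 0x7 (binary 111) covers cores 0-2 and 0x1c (binary 11100) covers cores 2-4, and core 2 sits in both. Core 2 is therefore the one the claim errors name — in the earlier locking_overlapped_coremask run against 433773, and again in the failure that follows. Decoding the masks in shell:

    for mask in 0x7 0x1c; do
        printf '%s -> cores:' "$mask"
        for core in {0..7}; do
            (( (mask >> core) & 1 )) && printf ' %d' "$core"
        done
        echo
    done
    # 0x7  -> cores: 0 1 2
    # 0x1c -> cores: 2 3 4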
00:06:00.152 [2024-06-10 12:10:05.717144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.412 [2024-06-10 12:10:05.822770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.412 [2024-06-10 12:10:05.826318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.412 [2024-06-10 12:10:05.826320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.983 [2024-06-10 12:10:06.413260] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 434151 has claimed it. 
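claim_cpu_cores, whose error just fired, backs every claimed core with an exclusive lock on /var/tmp/spdk_cpu_lock_NNN: the first locker wins, and later claimants get 'Cannot create lock on core N'. A shell approximation using flock(1) — SPDK's app.c takes the lock in C, so this only mirrors the first-wins behaviour, not the exact locking call:

    claim_core() {
        local core=$1 fd lockfile
        lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
        exec {fd}>"$lockfile"                  # open (creating if needed) on a fresh fd
        if ! flock -xn "$fd"; then             # exclusive and non-blocking
            echo "Cannot create lock on core $core: already claimed" >&2
            return 1
        fi
    }

Run claim_core 2 from two shells and the second fails exactly as 434400 does here.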
00:06:00.983 request: 00:06:00.983 { 00:06:00.983 "method": "framework_enable_cpumask_locks", 00:06:00.983 "req_id": 1 00:06:00.983 } 00:06:00.983 Got JSON-RPC error response 00:06:00.983 response: 00:06:00.983 { 00:06:00.983 "code": -32603, 00:06:00.983 "message": "Failed to claim CPU core: 2" 00:06:00.983 } 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 434151 /var/tmp/spdk.sock 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 434151 ']' 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.983 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 434400 /var/tmp/spdk2.sock 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 434400 ']' 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
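The JSON-RPC body above is what rpc_cmd surfaced: method framework_enable_cpumask_locks, error code -32603, "Failed to claim CPU core: 2". Assuming a stock SPDK tree (the scripts/rpc.py path is an assumption, it is not shown in this trace), the same exchange could be reproduced by hand:

    # hypothetical manual repro against the second target's socket
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # per the log this returns error -32603 while pid 434151 still holds core 2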
00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.244 00:06:01.244 real 0m1.984s 00:06:01.244 user 0m0.739s 00:06:01.244 sys 0m0.167s 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.244 12:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.244 ************************************ 00:06:01.244 END TEST locking_overlapped_coremask_via_rpc 00:06:01.244 ************************************ 00:06:01.244 12:10:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.244 12:10:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 434151 ]] 00:06:01.244 12:10:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 434151 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 434151 ']' 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 434151 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 434151 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 434151' 00:06:01.244 killing process with pid 434151 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 434151 00:06:01.244 12:10:06 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 434151 00:06:01.505 12:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 434400 ]] 00:06:01.505 12:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 434400 00:06:01.505 12:10:07 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 434400 ']' 00:06:01.505 12:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 434400 00:06:01.505 12:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:01.505 12:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
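check_remaining_locks above verifies the lock files the first target created when its cpumask locks were enabled: the glob /var/tmp/spdk_cpu_lock_* must expand to exactly spdk_cpu_lock_000 through spdk_cpu_lock_002, one file per claimed core. The same check outside the harness, as a minimal sketch (file names taken from the trace, the rest illustrative):

    expected='/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002'
    actual=$(echo /var/tmp/spdk_cpu_lock_*)
    [ "$actual" = "$expected" ] && echo 'locks match' || echo "unexpected locks: $actual"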
00:06:01.505 12:10:07 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 434400 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 434400' 00:06:01.766 killing process with pid 434400 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 434400 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 434400 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 434151 ]] 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 434151 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 434151 ']' 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 434151 00:06:01.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (434151) - No such process 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 434151 is not found' 00:06:01.766 Process with pid 434151 is not found 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 434400 ]] 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 434400 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 434400 ']' 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 434400 00:06:01.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (434400) - No such process 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 434400 is not found' 00:06:01.766 Process with pid 434400 is not found 00:06:01.766 12:10:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.766 00:06:01.766 real 0m15.547s 00:06:01.766 user 0m26.675s 00:06:01.766 sys 0m4.674s 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.766 12:10:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.766 ************************************ 00:06:01.766 END TEST cpu_locks 00:06:01.766 ************************************ 00:06:01.766 00:06:01.766 real 0m41.140s 00:06:01.766 user 1m21.602s 00:06:01.766 sys 0m7.649s 00:06:01.766 12:10:07 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.766 12:10:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.766 ************************************ 00:06:01.766 END TEST event 00:06:01.766 ************************************ 00:06:02.028 12:10:07 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.028 12:10:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:02.028 12:10:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.028 12:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.028 ************************************ 00:06:02.028 START TEST thread 00:06:02.028 ************************************ 00:06:02.028 12:10:07 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.028 * Looking for test storage... 00:06:02.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.028 12:10:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.028 12:10:07 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:02.028 12:10:07 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.028 12:10:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.028 ************************************ 00:06:02.028 START TEST thread_poller_perf 00:06:02.028 ************************************ 00:06:02.028 12:10:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.028 [2024-06-10 12:10:07.578425] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:02.028 [2024-06-10 12:10:07.578529] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434923 ] 00:06:02.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.288 [2024-06-10 12:10:07.655626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.288 [2024-06-10 12:10:07.729669] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.288 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:03.232 ====================================== 00:06:03.232 busy:2409336068 (cyc) 00:06:03.232 total_run_count: 288000 00:06:03.232 tsc_hz: 2400000000 (cyc) 00:06:03.232 ====================================== 00:06:03.232 poller_cost: 8365 (cyc), 3485 (nsec) 00:06:03.232 00:06:03.232 real 0m1.237s 00:06:03.232 user 0m1.157s 00:06:03.232 sys 0m0.076s 00:06:03.232 12:10:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.232 12:10:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.232 ************************************ 00:06:03.232 END TEST thread_poller_perf 00:06:03.232 ************************************ 00:06:03.232 12:10:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.232 12:10:08 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:03.232 12:10:08 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.232 12:10:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.493 ************************************ 00:06:03.493 START TEST thread_poller_perf 00:06:03.493 ************************************ 00:06:03.493 12:10:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.493 [2024-06-10 12:10:08.886679] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
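The poller_cost figure above is plain arithmetic over the reported counters: busy cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. A worked check in bash, numbers copied from the run above (bc assumed available):

    busy=2409336068; runs=288000; tsc_hz=2400000000
    echo "$busy / $runs" | bc                   # 8365 cycles per poll
    echo "8365 * 1000000000 / $tsc_hz" | bc     # 3485 nsec per poll

As a sanity check, 288000 polls at roughly 3485 nsec each accounts for about 1.0 second, i.e. the full measurement window.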
00:06:03.493 [2024-06-10 12:10:08.886764] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435153 ] 00:06:03.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.493 [2024-06-10 12:10:08.958642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.493 [2024-06-10 12:10:09.028180] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.493 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.876 ====================================== 00:06:04.876 busy:2401779532 (cyc) 00:06:04.876 total_run_count: 3813000 00:06:04.876 tsc_hz: 2400000000 (cyc) 00:06:04.877 ====================================== 00:06:04.877 poller_cost: 629 (cyc), 262 (nsec) 00:06:04.877 00:06:04.877 real 0m1.218s 00:06:04.877 user 0m1.136s 00:06:04.877 sys 0m0.078s 00:06:04.877 12:10:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.877 12:10:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.877 ************************************ 00:06:04.877 END TEST thread_poller_perf 00:06:04.877 ************************************ 00:06:04.877 12:10:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.877 00:06:04.877 real 0m2.703s 00:06:04.877 user 0m2.397s 00:06:04.877 sys 0m0.313s 00:06:04.877 12:10:10 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.877 12:10:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.877 ************************************ 00:06:04.877 END TEST thread 00:06:04.877 ************************************ 00:06:04.877 12:10:10 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.877 12:10:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.877 12:10:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.877 12:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.877 ************************************ 00:06:04.877 START TEST accel 00:06:04.877 ************************************ 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.877 * Looking for test storage... 00:06:04.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:04.877 12:10:10 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:04.877 12:10:10 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:04.877 12:10:10 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.877 12:10:10 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=435415 00:06:04.877 12:10:10 accel -- accel/accel.sh@63 -- # waitforlisten 435415 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@830 -- # '[' -z 435415 ']' 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
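The second poller_perf run above (-l 0, i.e. a 0 microsecond period) follows the same formula and shows why untimed pollers are far cheaper per invocation: 3813000 runs against 288000 before, at a fraction of the per-poll cost. The same bc check, numbers from that run:

    echo '2401779532 / 3813000' | bc              # 629 cycles per poll
    echo '629 * 1000000000 / 2400000000' | bc     # 262 nsec per poll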
00:06:04.877 12:10:10 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.877 12:10:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.877 12:10:10 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:04.877 12:10:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.877 12:10:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.877 12:10:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.877 12:10:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.877 12:10:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.877 12:10:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:04.877 12:10:10 accel -- accel/accel.sh@41 -- # jq -r . 00:06:04.877 [2024-06-10 12:10:10.354879] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:04.877 [2024-06-10 12:10:10.354951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435415 ] 00:06:04.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.877 [2024-06-10 12:10:10.443530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.137 [2024-06-10 12:10:10.517835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@863 -- # return 0 00:06:05.710 12:10:11 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:05.710 12:10:11 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:05.710 12:10:11 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:05.710 12:10:11 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:05.710 12:10:11 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:05.710 12:10:11 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.710 12:10:11 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 
12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.710 12:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.710 12:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.710 12:10:11 accel -- accel/accel.sh@75 -- # killprocess 435415 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@949 -- # '[' -z 435415 ']' 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@953 -- # kill -0 435415 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@954 -- # uname 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 435415 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 435415' 00:06:05.710 killing process with pid 435415 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@968 -- # kill 435415 00:06:05.710 12:10:11 accel -- common/autotest_common.sh@973 -- # wait 435415 00:06:05.971 12:10:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:05.971 12:10:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.971 12:10:11 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:05.971 12:10:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
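The long IFS== / read -r opc module loop above is walking the output of accel_get_opc_assignments; with no hardware accel modules configured in this run, every opcode resolves to the software module, hence the repeated expected_opcs["$opc"]=software assignments. The query itself can be issued by hand, a sketch assuming a stock SPDK tree (the rpc.py path is not shown in this trace; the jq filter is copied from it):

    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # expected here: one 'opcode=software' line per supported operation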
00:06:05.971 12:10:11 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.971 12:10:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:05.971 12:10:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.971 12:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.232 ************************************ 00:06:06.232 START TEST accel_missing_filename 00:06:06.232 ************************************ 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.232 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:06.232 12:10:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:06.232 [2024-06-10 12:10:11.610292] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:06.232 [2024-06-10 12:10:11.610393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435718 ] 00:06:06.232 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.232 [2024-06-10 12:10:11.685631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.232 [2024-06-10 12:10:11.759496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.232 [2024-06-10 12:10:11.791688] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.232 [2024-06-10 12:10:11.828600] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:06.494 A filename is required. 
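accel_missing_filename above is a negative test: the NOT wrapper expects accel_perf to refuse a compress workload when no input file is supplied, and the application obliges with "A filename is required." before exiting non-zero. Stripped of the harness plumbing, the repro is simply (build path taken from the trace, config fd omitted):

    ./build/examples/accel_perf -t 1 -w compress && echo 'unexpected success'
    # per the log: ERROR starting application, "A filename is required."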
00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:06.494 00:06:06.494 real 0m0.304s 00:06:06.494 user 0m0.226s 00:06:06.494 sys 0m0.118s 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.494 12:10:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:06.494 ************************************ 00:06:06.494 END TEST accel_missing_filename 00:06:06.494 ************************************ 00:06:06.494 12:10:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.494 12:10:11 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:06.494 12:10:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.494 12:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.494 ************************************ 00:06:06.494 START TEST accel_compress_verify 00:06:06.494 ************************************ 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.494 12:10:11 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.494 
12:10:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:06.494 12:10:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:06.494 [2024-06-10 12:10:11.988960] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:06.494 [2024-06-10 12:10:11.989061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435795 ] 00:06:06.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.494 [2024-06-10 12:10:12.058984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.757 [2024-06-10 12:10:12.131011] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.757 [2024-06-10 12:10:12.163097] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.757 [2024-06-10 12:10:12.199949] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:06.757 00:06:06.757 Compression does not support the verify option, aborting. 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:06.757 00:06:06.757 real 0m0.297s 00:06:06.757 user 0m0.227s 00:06:06.757 sys 0m0.111s 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.757 12:10:12 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:06.757 ************************************ 00:06:06.757 END TEST accel_compress_verify 00:06:06.757 ************************************ 00:06:06.757 12:10:12 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:06.757 12:10:12 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:06.757 12:10:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.757 12:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.757 ************************************ 00:06:06.757 START TEST accel_wrong_workload 00:06:06.757 ************************************ 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:06.757 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.758 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:06.758 
12:10:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:06.758 12:10:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:06.758 Unsupported workload type: foobar 00:06:06.758 [2024-06-10 12:10:12.357704] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:07.047 accel_perf options: 00:06:07.047 [-h help message] 00:06:07.047 [-q queue depth per core] 00:06:07.047 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.047 [-T number of threads per core 00:06:07.047 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.047 [-t time in seconds] 00:06:07.047 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.047 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.047 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.047 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.047 [-S for crc32c workload, use this seed value (default 0) 00:06:07.047 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.047 [-f for fill workload, use this BYTE value (default 255) 00:06:07.047 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.047 [-y verify result if this switch is on] 00:06:07.047 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.047 Can be used to spread operations across a wider range of memory. 
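The option summary above is accel_perf rejecting -w foobar: the workload name must be one of the listed types (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor and the dif variants). The contrast, as an illustrative pair of standalone invocations with flags taken from this log:

    ./build/examples/accel_perf -t 1 -w foobar           # rejected: unsupported workload type
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y  # accepted: crc32c, seed 32, verify on

The stray "Error: writing output failed: Broken pipe" lines around these negative tests are most likely the output pipe closing after the expected failure, not an additional defect.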
00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:07.047 00:06:07.047 real 0m0.035s 00:06:07.047 user 0m0.021s 00:06:07.047 sys 0m0.014s 00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.047 12:10:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:07.047 ************************************ 00:06:07.047 END TEST accel_wrong_workload 00:06:07.047 ************************************ 00:06:07.047 Error: writing output failed: Broken pipe 00:06:07.047 12:10:12 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.047 ************************************ 00:06:07.047 START TEST accel_negative_buffers 00:06:07.047 ************************************ 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:07.047 12:10:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:07.047 -x option must be non-negative. 
00:06:07.047 [2024-06-10 12:10:12.471469] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:07.047 accel_perf options: 00:06:07.047 [-h help message] 00:06:07.047 [-q queue depth per core] 00:06:07.047 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.047 [-T number of threads per core 00:06:07.047 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.047 [-t time in seconds] 00:06:07.047 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.047 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.047 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.047 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.047 [-S for crc32c workload, use this seed value (default 0) 00:06:07.047 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.047 [-f for fill workload, use this BYTE value (default 255) 00:06:07.047 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.047 [-y verify result if this switch is on] 00:06:07.047 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.047 Can be used to spread operations across a wider range of memory. 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:07.047 00:06:07.047 real 0m0.041s 00:06:07.047 user 0m0.024s 00:06:07.047 sys 0m0.017s 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.047 12:10:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:07.047 ************************************ 00:06:07.047 END TEST accel_negative_buffers 00:06:07.047 ************************************ 00:06:07.047 Error: writing output failed: Broken pipe 00:06:07.047 12:10:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.047 12:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.047 ************************************ 00:06:07.047 START TEST accel_crc32c 00:06:07.047 ************************************ 00:06:07.047 12:10:12 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:07.047 12:10:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:07.048 12:10:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:07.048 [2024-06-10 12:10:12.585052] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:07.048 [2024-06-10 12:10:12.585111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436121 ] 00:06:07.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.312 [2024-06-10 12:10:12.652713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.312 [2024-06-10 12:10:12.717257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.312 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.313 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.313 12:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.313 12:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.313 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.313 12:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.252 12:10:13 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.252 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.253 12:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.253 00:06:08.253 real 0m1.291s 00:06:08.253 user 0m1.203s 00:06:08.253 sys 0m0.099s 00:06:08.253 12:10:13 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.253 12:10:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 ************************************ 00:06:08.253 END TEST accel_crc32c 00:06:08.253 ************************************ 00:06:08.512 12:10:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:08.512 12:10:13 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:08.512 12:10:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.512 12:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.512 ************************************ 00:06:08.512 START TEST accel_crc32c_C2 00:06:08.512 ************************************ 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:08.512 12:10:13 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.512 12:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:08.512 [2024-06-10 12:10:13.949381] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:08.512 [2024-06-10 12:10:13.949462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436333 ] 00:06:08.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.512 [2024-06-10 12:10:14.020306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.512 [2024-06-10 12:10:14.093827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.772 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.772 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.772 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.772 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.772 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 12:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 
12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.713 00:06:09.713 real 0m1.302s 00:06:09.713 user 0m1.205s 00:06:09.713 sys 0m0.109s 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.713 12:10:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 ************************************ 00:06:09.713 END TEST accel_crc32c_C2 00:06:09.713 ************************************ 00:06:09.713 12:10:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:09.713 12:10:15 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:09.713 12:10:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.713 12:10:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 ************************************ 00:06:09.713 START TEST accel_copy 00:06:09.713 ************************************ 00:06:09.713 12:10:15 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:09.713 12:10:15 
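[Note] accel_crc32c_C2 completes in real 0m1.302s against a 1-second workload (-t 1); the extra ~0.3 s is application start-up and teardown. Every test in this log goes through the same run_test wrapper, whose observable shape (the START/END banners plus the real/user/sys timings) can be sketched as below; the actual helper in common/autotest_common.sh also handles xtrace toggling and failure propagation, which this sketch omits:
run_test() {
  # Print the banners and timing seen throughout this log.
  local test_name=$1; shift
  echo "START TEST $test_name"
  time "$@"
  echo "END TEST $test_name"
}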
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:09.713 12:10:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:09.973 [2024-06-10 12:10:15.327346] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:09.973 [2024-06-10 12:10:15.327448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436530 ] 00:06:09.973 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.973 [2024-06-10 12:10:15.397785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.973 [2024-06-10 12:10:15.471721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.973 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.974 12:10:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:11.352 12:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.352 00:06:11.352 real 0m1.304s 00:06:11.352 user 0m1.206s 00:06:11.352 sys 0m0.108s 00:06:11.352 12:10:16 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.352 12:10:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.352 ************************************ 00:06:11.352 END TEST accel_copy 00:06:11.352 ************************************ 00:06:11.352 12:10:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.352 12:10:16 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:11.352 12:10:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.352 12:10:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.352 ************************************ 00:06:11.352 START TEST accel_fill 00:06:11.352 ************************************ 00:06:11.352 12:10:16 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.352 12:10:16 accel.accel_fill -- 
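[Note] The fill invocation just logged passes -f 128, and the trace below echoes it back as val=0x80: accel_perf reports the fill byte in hex. Likewise the two val=64 lines below track the -q 64 -a 64 arguments, where the earlier crc32c/copy runs showed the default val=32 pair. Quick check of the hex conversion:
printf '0x%02x\n' 128    # -> 0x80, matching the val=0x80 line below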
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:11.352 [2024-06-10 12:10:16.701862] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:11.352 [2024-06-10 12:10:16.701923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436864 ] 00:06:11.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.352 [2024-06-10 12:10:16.770399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.352 [2024-06-10 12:10:16.838733] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.352 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.353 12:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:12.734 12:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.734 00:06:12.734 real 0m1.293s 00:06:12.734 user 0m1.199s 00:06:12.734 sys 0m0.105s 00:06:12.734 12:10:17 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.734 12:10:17 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:12.734 ************************************ 00:06:12.734 END TEST accel_fill 00:06:12.734 ************************************ 00:06:12.734 12:10:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:12.734 12:10:18 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:12.734 12:10:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.734 12:10:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.734 ************************************ 00:06:12.734 START TEST accel_copy_crc32c 00:06:12.734 ************************************ 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
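[Note] The accel.sh@102 through @105 lines so far each launch one workload through the same accel_test helper. An equivalent driver loop, with the flags copied verbatim from the invocations logged above (the test-name derivation is simplified here and assumes run_test/accel_test are already sourced):
for args in 'accel_crc32c_C2 -w crc32c -y -C 2' 'accel_copy -w copy -y' \
            'accel_fill -w fill -f 128 -q 64 -a 64 -y' 'accel_copy_crc32c -w copy_crc32c -y'; do
  set -- $args          # word-split the quoted string into name + flags
  name=$1; shift
  run_test "$name" accel_test -t 1 "$@"
done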
00:06:12.734 [2024-06-10 12:10:18.070075] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:12.734 [2024-06-10 12:10:18.070159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437211 ] 00:06:12.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.734 [2024-06-10 12:10:18.137375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.734 [2024-06-10 12:10:18.202913] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.734 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.735 12:10:18 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.735 12:10:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.114 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.115 00:06:14.115 real 0m1.291s 00:06:14.115 user 0m1.200s 00:06:14.115 sys 0m0.102s 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.115 12:10:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:14.115 ************************************ 00:06:14.115 END TEST accel_copy_crc32c 00:06:14.115 ************************************ 00:06:14.115 12:10:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:14.115 12:10:19 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:14.115 12:10:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.115 12:10:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.115 ************************************ 00:06:14.115 START TEST accel_copy_crc32c_C2 00:06:14.115 ************************************ 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:14.115 [2024-06-10 12:10:19.433674] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:14.115 [2024-06-10 12:10:19.433735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437563 ] 00:06:14.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.115 [2024-06-10 12:10:19.500784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.115 [2024-06-10 12:10:19.566409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:14.115 12:10:19 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.115 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.116 12:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.497 00:06:15.497 real 0m1.290s 00:06:15.497 user 0m1.197s 00:06:15.497 sys 0m0.104s 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.497 12:10:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:15.497 ************************************ 00:06:15.497 END TEST accel_copy_crc32c_C2 00:06:15.497 ************************************ 00:06:15.497 12:10:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:15.497 12:10:20 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:15.497 12:10:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.497 12:10:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.497 ************************************ 00:06:15.497 START TEST accel_dualcast 00:06:15.497 ************************************ 00:06:15.497 12:10:20 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:15.497 [2024-06-10 12:10:20.799532] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
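[Note] Each accel_perf run initializes its own DPDK instance: the EAL parameter lines carry a fresh --file-prefix=spdk_pid<N> (436333, 436530, 436864, 437211, and 437563 so far). To pull them out of a saved copy of this log (the file name here is hypothetical):
grep -o 'file-prefix=spdk_pid[0-9]*' nvmf-tcp-phy-autotest.log | sort -u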
00:06:15.497 [2024-06-10 12:10:20.799624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437819 ] 00:06:15.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.497 [2024-06-10 12:10:20.869731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.497 [2024-06-10 12:10:20.938347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.497 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 
12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.498 12:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.880 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:16.881 12:10:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.881 00:06:16.881 real 0m1.296s 00:06:16.881 user 0m1.202s 00:06:16.881 sys 0m0.105s 00:06:16.881 12:10:22 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:16.881 12:10:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:16.881 ************************************ 00:06:16.881 END TEST accel_dualcast 00:06:16.881 ************************************ 00:06:16.881 12:10:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.881 12:10:22 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:16.881 12:10:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:16.881 12:10:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.881 ************************************ 00:06:16.881 START TEST accel_compare 00:06:16.881 ************************************ 00:06:16.881 12:10:22 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:16.881 [2024-06-10 12:10:22.170897] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
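
Note on the traces above: the long runs of "val=" / "case $var" lines are bash xtrace output from accel.sh as it reads accel_perf's reported settings line by line (IFS=: splits lines such as "opcode: compare" into var/val pairs), capturing the opcode and module before the final [[ ... ]] assertions. A minimal sketch of that loop, simplified to the two captured fields, using the exact command line visible in this run (fd 62 carries the JSON accel config the harness builds):

#!/usr/bin/env bash
# Minimal sketch of the settings-parsing loop traced in this log; the
# real loop lives in spdk/test/accel/accel.sh and matches more fields.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
accel_opc="" accel_module=""
while IFS=: read -r var val; do
  case "$var" in
    *opcode*) accel_opc=${val# } ;;    # trace: accel_opc=compare
    *module*) accel_module=${val# } ;; # trace: accel_module=software
  esac
done < <("$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w compare -y)
# Final assertions, as at accel.sh line 27 in the trace: an opcode and a
# module were reported, and the software engine handled the operation.
[[ -n $accel_module ]]
[[ -n $accel_opc ]]
[[ $accel_module == software ]]
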
00:06:16.881 [2024-06-10 12:10:22.170989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438001 ] 00:06:16.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.881 [2024-06-10 12:10:22.239766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.881 [2024-06-10 12:10:22.306006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.881 12:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:18.263 12:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.263 00:06:18.263 real 0m1.293s 00:06:18.263 user 0m1.197s 00:06:18.263 sys 0m0.108s 00:06:18.263 12:10:23 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.263 12:10:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:18.263 ************************************ 00:06:18.263 END TEST accel_compare 00:06:18.263 ************************************ 00:06:18.263 12:10:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:18.263 12:10:23 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:18.263 12:10:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.263 12:10:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.263 ************************************ 00:06:18.263 START TEST accel_xor 00:06:18.263 ************************************ 00:06:18.263 12:10:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:18.263 [2024-06-10 12:10:23.538732] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
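
Each test in this stretch is wrapped by run_test, which prints the START/END banners and, via time, the real/user/sys lines seen before each END marker. A simplified sketch of that wrapper follows; the actual helper in spdk/test/common/autotest_common.sh also records results and toggles xtrace:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

# As invoked at accel.sh line 109 in the trace:
run_test accel_xor accel_test -t 1 -w xor -y
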
00:06:18.263 [2024-06-10 12:10:23.538791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438299 ] 00:06:18.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.263 [2024-06-10 12:10:23.606255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.263 [2024-06-10 12:10:23.669623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.263 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.264 12:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 
12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:19.207 12:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.207 00:06:19.207 real 0m1.289s 00:06:19.207 user 0m1.193s 00:06:19.207 sys 0m0.107s 00:06:19.207 12:10:24 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.207 12:10:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:19.207 ************************************ 00:06:19.207 END TEST accel_xor 00:06:19.207 ************************************ 00:06:19.470 12:10:24 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.470 12:10:24 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:19.470 12:10:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.470 12:10:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.470 ************************************ 00:06:19.470 START TEST accel_xor 00:06:19.470 ************************************ 00:06:19.470 12:10:24 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:19.470 12:10:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:19.470 [2024-06-10 12:10:24.903612] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
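
The two xor passes differ only in source-buffer count: the run above reports 2 sources (val=2 in its trace), while the -x 3 run that follows reports 3 (val=3). Both verify results with -y. The underlying invocations, as visible in the trace (SPDK path as in this job):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w xor -y        # 2 sources
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w xor -y -x 3  # 3 sources
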
00:06:19.470 [2024-06-10 12:10:24.903685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438655 ] 00:06:19.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.470 [2024-06-10 12:10:24.980756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.470 [2024-06-10 12:10:25.044624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.732 12:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 
12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:20.674 12:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.674 00:06:20.674 real 0m1.299s 00:06:20.674 user 0m1.202s 00:06:20.674 sys 0m0.108s 00:06:20.674 12:10:26 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.674 12:10:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 END TEST accel_xor 00:06:20.674 ************************************ 00:06:20.674 12:10:26 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:20.674 12:10:26 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:20.674 12:10:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.674 12:10:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 START TEST accel_dif_verify 00:06:20.674 ************************************ 00:06:20.674 12:10:26 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:20.674 12:10:26 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:20.674 [2024-06-10 12:10:26.279420] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
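
The dif_verify run below parses two '4096 bytes' values plus '512 bytes' and '8 bytes'. Reading these as T10-DIF-style parameters (4 KiB buffers of 512-byte blocks, each block carrying 8 bytes of protection information) is an interpretation; the trace only shows the raw strings. Under that reading:

# Hypothetical back-of-envelope from the values parsed below; the
# mapping of val strings to buffer/block/PI sizes is an assumption.
buf=4096 blk=512 pi=8
echo "$(( buf / blk )) blocks, $(( buf / blk * pi )) bytes of PI per buffer"
# -> 8 blocks, 64 bytes of PI per buffer
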
00:06:20.934 [2024-06-10 12:10:26.279535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439004 ] 00:06:20.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.934 [2024-06-10 12:10:26.352847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.934 [2024-06-10 12:10:26.420426] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 
12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.934 12:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 
12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:22.319 12:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.319 00:06:22.319 real 0m1.303s 00:06:22.319 user 0m1.201s 00:06:22.319 sys 0m0.114s 00:06:22.319 12:10:27 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.319 12:10:27 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.319 ************************************ 00:06:22.319 END TEST accel_dif_verify 00:06:22.319 ************************************ 00:06:22.319 12:10:27 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:22.319 12:10:27 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:22.319 12:10:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.319 12:10:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.319 ************************************ 00:06:22.319 START TEST accel_dif_generate 00:06:22.319 ************************************ 00:06:22.319 12:10:27 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 
12:10:27 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:22.319 [2024-06-10 12:10:27.652848] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:22.319 [2024-06-10 12:10:27.652923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439302 ] 00:06:22.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.319 [2024-06-10 12:10:27.722950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.319 [2024-06-10 12:10:27.794468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.319 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.320 12:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:23.707 12:10:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.707 00:06:23.707 real 0m1.299s 00:06:23.707 user 0m1.198s 00:06:23.707 sys 
0m0.114s 00:06:23.707 12:10:28 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.707 12:10:28 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:23.707 ************************************ 00:06:23.707 END TEST accel_dif_generate 00:06:23.707 ************************************ 00:06:23.707 12:10:28 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:23.707 12:10:28 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:23.707 12:10:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.707 12:10:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.707 ************************************ 00:06:23.707 START TEST accel_dif_generate_copy 00:06:23.707 ************************************ 00:06:23.707 12:10:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:23.707 12:10:28 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:23.707 12:10:28 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:23.707 12:10:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:28 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:23.707 [2024-06-10 12:10:29.023205] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
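
With dif_generate done, the suite moves on to dif_generate_copy (a copy variant of dif_generate: protection information is generated while the data is copied to a new buffer rather than in place); its run continues below. Consolidated from the accel.sh@108-113 markers in this log, the workload sequence for this stretch of the suite is:

run_test accel_compare           accel_test -t 1 -w compare -y
run_test accel_xor               accel_test -t 1 -w xor -y
run_test accel_xor               accel_test -t 1 -w xor -y -x 3
run_test accel_dif_verify        accel_test -t 1 -w dif_verify
run_test accel_dif_generate      accel_test -t 1 -w dif_generate
run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
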
00:06:23.707 [2024-06-10 12:10:29.023267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439502 ] 00:06:23.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.707 [2024-06-10 12:10:29.092051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.707 [2024-06-10 12:10:29.163385] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.707 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
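The repeating IFS=: / read -r var val / case "$var" in triplets above are accel.sh (script lines 19-21 in the trace) splitting accel_perf's colon-separated output into variables; lines 22-23 show it capturing accel_module=software and the accel_opc under test. A sketch of that loop's shape, reconstructed from the xtrace rather than quoted from accel.sh, with hypothetical case patterns and sample input:

while IFS=: read -r var val; do
    case "$var" in
        # the pattern names are assumptions; the trace only shows the resulting assignments
        module) accel_module=$val ;;
        opc) accel_opc=$val ;;
    esac
done <<< $'module:software\nopc:dif_generate_copy'  # stand-in for accel_perf output
echo "$accel_module $accel_opc"                     # -> software dif_generate_copy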
00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.152 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.153 00:06:25.153 real 0m1.296s 00:06:25.153 user 0m1.203s 00:06:25.153 sys 0m0.104s 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.153 12:10:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:25.153 ************************************ 00:06:25.153 END TEST accel_dif_generate_copy 00:06:25.153 ************************************ 00:06:25.153 12:10:30 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:25.153 12:10:30 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.153 12:10:30 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:25.153 12:10:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.153 12:10:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.153 ************************************ 00:06:25.153 START TEST accel_comp 00:06:25.153 ************************************ 00:06:25.153 12:10:30 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:25.153 [2024-06-10 12:10:30.393552] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:25.153 [2024-06-10 12:10:30.393613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439745 ] 00:06:25.153 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.153 [2024-06-10 12:10:30.481014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.153 [2024-06-10 12:10:30.556255] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 
12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.153 12:10:30 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:25.153 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.154 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.154 12:10:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.154 12:10:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.154 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.154 12:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:26.093 12:10:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.093 00:06:26.093 real 0m1.321s 00:06:26.093 user 0m1.207s 00:06:26.093 sys 0m0.126s 00:06:26.093 12:10:31 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.094 12:10:31 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:26.094 ************************************ 00:06:26.094 END TEST accel_comp 00:06:26.094 ************************************ 00:06:26.355 12:10:31 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.355 12:10:31 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:26.355 12:10:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.355 12:10:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 ************************************ 00:06:26.355 START TEST accel_decomp 00:06:26.355 ************************************ 00:06:26.355 12:10:31 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:26.355 [2024-06-10 12:10:31.793435] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:26.355 [2024-06-10 12:10:31.793526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440094 ] 00:06:26.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.355 [2024-06-10 12:10:31.861305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.355 [2024-06-10 12:10:31.924968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.355 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.616 12:10:31 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.616 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.617 12:10:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.558 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.558 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.558 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.559 12:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.559 00:06:27.559 real 0m1.293s 00:06:27.559 user 0m1.203s 00:06:27.559 sys 0m0.101s 00:06:27.559 12:10:33 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.559 12:10:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:27.559 ************************************ 00:06:27.559 END TEST accel_decomp 00:06:27.559 ************************************ 00:06:27.559 
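accel_decomp above decompresses the pre-built bib test file for one second on the software module (real 0m1.293s wall time). Rerunning just this case, with the command exactly as recorded in the trace and the same workspace path assumed:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -l names the compressed input file; -y asks accel_perf to verify the output
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y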
12:10:33 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.559 12:10:33 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:27.559 12:10:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.559 12:10:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.559 ************************************ 00:06:27.559 START TEST accel_decomp_full 00:06:27.559 ************************************ 00:06:27.559 12:10:33 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:27.559 12:10:33 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:27.559 [2024-06-10 12:10:33.159527] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
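accel_decomp_full is the same decompress case with -o 0 appended; correspondingly, the config dump below reports '111250 bytes' transfers instead of the 4096-byte default, i.e. a block size of 0 makes the harness drive the bib payload at full size. Sketch of the delta, reusing the path from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# identical to accel_decomp, plus -o 0 (zero block size -> full 111250-byte operations)
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0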
00:06:27.559 [2024-06-10 12:10:33.159618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440447 ] 00:06:27.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.821 [2024-06-10 12:10:33.227595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.821 [2024-06-10 12:10:33.295152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
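Each test forks a fresh accel_perf, hence the new DPDK EAL instance per run with its own --file-prefix=spdk_pid<pid>. The recurring 'EAL: No free 2048 kB hugepages reported on node 1' notice appears to be informational here, since every run proceeds and passes, presumably with the memory pool backed by a different hugepage size. To check what a node actually provides:

# system-wide hugepage counters, then the per-node, per-size breakdown
grep -i huge /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages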
00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.821 12:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.207 12:10:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.207 00:06:29.207 real 0m1.307s 00:06:29.207 user 0m1.211s 00:06:29.207 sys 0m0.108s 00:06:29.207 12:10:34 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.207 12:10:34 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:29.207 ************************************ 00:06:29.207 END TEST accel_decomp_full 00:06:29.207 ************************************ 00:06:29.207 12:10:34 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.207 12:10:34 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:29.207 12:10:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.207 12:10:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.207 ************************************ 00:06:29.207 START TEST accel_decomp_mcore 00:06:29.207 ************************************ 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.207 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:29.208 [2024-06-10 12:10:34.539190] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:29.208 [2024-06-10 12:10:34.539287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440798 ] 00:06:29.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.208 [2024-06-10 12:10:34.607727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.208 [2024-06-10 12:10:34.674878] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.208 [2024-06-10 12:10:34.674996] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.208 [2024-06-10 12:10:34.675156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.208 [2024-06-10 12:10:34.675156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.208 12:10:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.596 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
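accel_decomp_mcore is the same decompress workload spread across a core mask: -m 0xf is binary 1111, i.e. cores 0 through 3, matching the four 'Reactor started' notices at the head of this test; in the results that follow, user time (0m4.439s) well above wall time is consistent with four polling reactors. Command as recorded in the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0xf runs reactors on cores 0-3; without -m the earlier runs used the single-core 0x1 mask
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf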
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:30.597
00:06:30.597 real 0m1.304s
00:06:30.597 user 0m4.439s
00:06:30.597 sys 0m0.111s
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:30.597 12:10:35 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:30.597 ************************************
00:06:30.597 END TEST accel_decomp_mcore
00:06:30.597 ************************************
00:06:30.597 12:10:35 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:30.597 12:10:35 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:06:30.597 12:10:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:30.597 12:10:35 accel -- common/autotest_common.sh@10 -- # set +x
00:06:30.597 ************************************
00:06:30.597 START TEST accel_decomp_full_mcore
00:06:30.597 ************************************
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:06:30.597 12:10:35 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . [2024-06-10 12:10:35.901925] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... [2024-06-10 12:10:35.901992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441024 ]
00:06:30.597 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.597 [2024-06-10 12:10:35.970653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:30.597 [2024-06-10 12:10:36.039165] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.597 [2024-06-10 12:10:36.039300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:06:30.597 [2024-06-10 12:10:36.039359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.597 [2024-06-10 12:10:36.039359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:30.597 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:30.598 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:30.598 12:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:31.982
00:06:31.982 real 0m1.316s
00:06:31.982 user 0m4.481s
00:06:31.982 sys 0m0.118s
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:31.982 12:10:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:31.982 ************************************
00:06:31.982 END TEST accel_decomp_full_mcore
00:06:31.982 ************************************
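The repeated accel.sh@19-@21 entries above are bash xtrace output from the option parser in test/accel/accel.sh, not a stalled run: the wrapper reads accel_perf's "key: value" banner line by line and latches the fields it cares about, which is why every banner line surfaces as a val=... trace. A rough sketch of that loop follows; it is a reconstruction for orientation, with the key strings being assumptions, and only the IFS=: / read / case shape taken from the trace itself:

# Reconstruction of the parse loop traced at accel.sh@19-@23 above (not the verbatim script).
while IFS=: read -r var val; do
    val=${val## }                          # trim the space after the colon (accel.sh@20)
    case "$var" in                         # accel.sh@21
        *Module*)    accel_module=$val ;;  # latched as accel_module=software (accel.sh@22)
        *Operation*) accel_opc=$val ;;     # latched as accel_opc=decompress (accel.sh@23)
    esac
done

The three accel.sh@27 checks just before each timing summary are the actual assertions: both fields were captured, and the software module really handled the decompress.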
00:06:31.982 12:10:37 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:31.982 12:10:37 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:06:31.982 12:10:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:31.982 12:10:37 accel -- common/autotest_common.sh@10 -- # set +x
00:06:31.982 ************************************
00:06:31.982 START TEST accel_decomp_mthread
00:06:31.982 ************************************
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:06:31.982 [2024-06-10 12:10:37.292827] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... [2024-06-10 12:10:37.292919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441224 ]
00:06:31.982 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.982 [2024-06-10 12:10:37.362347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.982 [2024-06-10 12:10:37.431623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:31.982 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:31.983 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:31.983 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:31.983 12:10:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:33.369
00:06:33.369 real 0m1.304s
00:06:33.369 user 0m1.212s
00:06:33.369 sys 0m0.105s
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:33.369 12:10:38 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:33.369 ************************************
00:06:33.369 END TEST accel_decomp_mthread
00:06:33.369 ************************************
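Every accel_decomp_* case in this stretch reduces to one accel_perf invocation behind the accel_test wrapper; the mthread variant only adds -T 2 to the same command line. A sketch of reproducing the run by hand, with the -c /dev/fd/62 config plumbing (supplied by build_accel_config in the harness) left out; the flag meanings are read off the traced command line rather than from documentation:

# -t 1: run for 1 second; -w decompress: workload under test;
# -l: compressed input payload; -y: verify the output;
# -T 2: two worker threads, per the mthread naming.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2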
00:06:33.369 12:10:38 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:33.369 12:10:38 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:06:33.369 12:10:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:33.369 12:10:38 accel -- common/autotest_common.sh@10 -- # set +x
00:06:33.369 ************************************
00:06:33.369 START TEST accel_decomp_full_mthread
00:06:33.369 ************************************
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . [2024-06-10 12:10:38.672993] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:06:33.369 [2024-06-10 12:10:38.673074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441541 ]
00:06:33.369 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.369 [2024-06-10 12:10:38.740563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.369 [2024-06-10 12:10:38.804398] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.369 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:33.370 12:10:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:34.756
00:06:34.756 real 0m1.318s
00:06:34.756 user 0m1.216s
00:06:34.756 sys 0m0.113s
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:34.756 12:10:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:34.756 ************************************
00:06:34.756 END TEST accel_decomp_full_mthread
************************************
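The only delta between the plain and full variants is -o 0 on an otherwise identical command line, and the traces show what it changes: the full runs negotiate val='111250 bytes' per operation (the whole bib payload) where the plain runs show val='4096 bytes'. Side by side, from the run_test lines above; the reading of -o 0 as "use the whole input as a single transfer" is inferred from those traced sizes, not from documentation:

# accel_decomp_mthread: default 4096-byte transfers
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
# accel_decomp_full_mthread: -o 0 drives full-buffer (111250-byte) transfers
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2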
00:06:34.756 12:10:40 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:34.756 12:10:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:34.756 12:10:40 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:06:34.756 12:10:40 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:34.756 12:10:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:34.756 12:10:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:34.756 12:10:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:34.756 12:10:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:34.756 12:10:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.756 12:10:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.756 12:10:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:34.756 12:10:40 accel -- accel/accel.sh@40 -- # local IFS=,
00:06:34.756 12:10:40 accel -- accel/accel.sh@41 -- # jq -r .
00:06:34.756 ************************************
00:06:34.756 START TEST accel_dif_functional_tests
00:06:34.756 ************************************
00:06:34.757 12:10:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 [2024-06-10 12:10:40.087400] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... [2024-06-10 12:10:40.087456] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441900 ]
00:06:34.757 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.757 [2024-06-10 12:10:40.153863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:34.757 [2024-06-10 12:10:40.220323] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.757 [2024-06-10 12:10:40.220542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.757 [2024-06-10 12:10:40.220546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.757
00:06:34.757
00:06:34.757 CUnit - A unit testing framework for C - Version 2.1-3
00:06:34.757 http://cunit.sourceforge.net/
00:06:34.757
00:06:34.757
00:06:34.757 Suite: accel_dif
00:06:34.757 Test: verify: DIF generated, GUARD check ...passed
00:06:34.757 Test: verify: DIF generated, APPTAG check ...passed
00:06:34.757 Test: verify: DIF generated, REFTAG check ...passed
00:06:34.757 Test: verify: DIF not generated, GUARD check ...[2024-06-10 12:10:40.275696] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:34.757 passed
00:06:34.757 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 12:10:40.275740] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:34.757 passed
00:06:34.757 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 12:10:40.275760] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:34.757 passed
00:06:34.757 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:34.757 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 12:10:40.275807] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:34.757 passed
00:06:34.757 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:34.757 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:34.757 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:34.757 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 12:10:40.275919] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:34.757 passed
00:06:34.757 Test: verify copy: DIF generated, GUARD check ...passed
00:06:34.757 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:34.757 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:34.757 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 12:10:40.276040] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:34.757 passed
00:06:34.757 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 12:10:40.276063] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:34.757 passed
00:06:34.757 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 12:10:40.276084] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:34.757 passed
00:06:34.757 Test: generate copy: DIF generated, GUARD check ...passed
00:06:34.757 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:34.757 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:34.757 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:34.757 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:34.757 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:34.757 Test: generate copy: iovecs-len validate ...[2024-06-10 12:10:40.276279] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:34.757 passed
00:06:34.757 Test: generate copy: buffer alignment validate ...passed
00:06:34.757
00:06:34.757 Run Summary: Type Total Ran Passed Failed Inactive
00:06:34.757 suites 1 1 n/a 0 0
00:06:34.757 tests 26 26 26 0 0
00:06:34.757 asserts 115 115 115 0 n/a
00:06:34.757
00:06:34.757 Elapsed time = 0.002 seconds
00:06:35.019
00:06:35.019 real 0m0.355s
00:06:35.019 user 0m0.487s
00:06:35.019 sys 0m0.131s
00:06:35.019 12:10:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:35.019 12:10:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:06:35.019 ************************************
00:06:35.019 END TEST accel_dif_functional_tests
00:06:35.019 ************************************
00:06:35.019
00:06:35.019 real 0m30.237s
00:06:35.019 user 0m33.707s
00:06:35.019 sys 0m4.246s
00:06:35.019 12:10:40 accel -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:35.019 12:10:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:35.019 ************************************
00:06:35.019 END TEST accel
00:06:35.019 ************************************
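The *ERROR* lines inside the suite above are expected output, not failures: the negative tests corrupt a guard, app tag, or ref tag on purpose and pass exactly because _dif_verify reports the mismatch (for example Guard Expected=5a5a against Actual=7867). The binary can also be rerun on its own; a minimal sketch, assuming the config-over-fd-62 convention traced at accel.sh@137 and using an empty JSON config as a placeholder (the harness generates the real one via build_accel_config):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Placeholder config on fd 62; autotest's build_accel_config produces the real JSON.
./test/accel/dif/dif -c /dev/fd/62 62<<< '{}'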
00:06:35.019 12:10:40 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:35.019 12:10:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:35.019 12:10:40 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:35.019 12:10:40 -- common/autotest_common.sh@10 -- # set +x
00:06:35.019 ************************************
00:06:35.019 START TEST accel_rpc
00:06:35.019 ************************************
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh * Looking for test storage...
00:06:35.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:35.019 12:10:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:35.019 12:10:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=441988
00:06:35.019 12:10:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 441988
00:06:35.019 12:10:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 441988 ']'
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:35.019 12:10:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.281 [2024-06-10 12:10:40.668842] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:06:35.281 [2024-06-10 12:10:40.668916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441988 ]
00:06:35.281 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.281 [2024-06-10 12:10:40.739547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.281 [2024-06-10 12:10:40.814743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.852 12:10:41 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:35.852 12:10:41 accel_rpc -- common/autotest_common.sh@863 -- # return 0
00:06:35.852 12:10:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:35.852 12:10:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:35.852 12:10:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:35.852 12:10:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:35.852 12:10:41 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:35.852 12:10:41 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:35.852 12:10:41 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:35.852 12:10:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.852 ************************************
00:06:35.852 START TEST accel_assign_opcode
00:06:35.852 ************************************
00:06:35.852 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite
00:06:35.852 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:35.852 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.853 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:35.853 [2024-06-10 12:10:41.456655] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:36.113 [2024-06-10 12:10:41.468683] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:36.113 software
00:06:36.113
00:06:36.113 real 0m0.208s
00:06:36.113 user 0m0.048s
00:06:36.113 sys 0m0.012s
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:36.113 12:10:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:36.113 ************************************
00:06:36.113 END TEST accel_assign_opcode
00:06:36.113 ************************************
00:06:36.113 12:10:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 441988
00:06:36.113 12:10:41 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 441988 ']'
00:06:36.113 12:10:41 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 441988
00:06:36.113 12:10:41 accel_rpc -- common/autotest_common.sh@954 -- # uname
00:06:36.113 12:10:41 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:36.113 12:10:41 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 441988
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 441988' killing process with pid 441988
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@968 -- # kill 441988
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@973 -- # wait 441988
00:06:36.373
00:06:36.373 real 0m1.449s
00:06:36.373 user 0m1.524s
00:06:36.373 sys 0m0.399s
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:36.373 12:10:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.373 ************************************
00:06:36.373 END TEST accel_rpc
00:06:36.373 ************************************
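The assign-opcode test is a short RPC conversation with an spdk_tgt held in its pre-init state by --wait-for-rpc, and every step is visible in the trace above. Done by hand from the spdk checkout it would look like this (the first call deliberately names a bogus module, mirroring accel_rpc.sh@38, and the whole exchange happens before framework_start_init, matching the order traced here):

build/bin/spdk_tgt --wait-for-rpc &                      # target waits before subsystem init
scripts/rpc.py accel_assign_opc -o copy -m incorrect     # accepted pre-init (accel_rpc.sh@38)
scripts/rpc.py accel_assign_opc -o copy -m software      # overrides the bogus module (accel_rpc.sh@40)
scripts/rpc.py framework_start_init                      # completes init (accel_rpc.sh@41)
scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software (accel_rpc.sh@42)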
00:06:36.633 12:10:41 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:36.633 12:10:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:36.633 12:10:41 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:36.633 12:10:41 -- common/autotest_common.sh@10 -- # set +x
00:06:36.633 ************************************
00:06:36.633 START TEST app_cmdline
00:06:36.633 ************************************
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh * Looking for test storage...
00:06:36.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:36.633 12:10:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:36.633 12:10:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=442378
00:06:36.633 12:10:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 442378
00:06:36.633 12:10:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 442378 ']'
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:36.633 12:10:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:36.633 [2024-06-10 12:10:42.177818] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... [2024-06-10 12:10:42.177883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442378 ]
00:06:36.893 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.893 [2024-06-10 12:10:42.251517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.893 [2024-06-10 12:10:42.324532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.465 12:10:42 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:37.465 12:10:42 app_cmdline -- common/autotest_common.sh@863 -- # return 0
00:06:37.465 12:10:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:37.727 {
00:06:37.727 "version": "SPDK v24.09-pre git sha1 c5e2a446d",
00:06:37.727 "fields": {
00:06:37.727 "major": 24,
00:06:37.727 "minor": 9,
00:06:37.727 "patch": 0,
00:06:37.727 "suffix": "-pre",
00:06:37.727 "commit": "c5e2a446d"
00:06:37.727 }
00:06:37.727 }
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@649 -- # local es=0
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.727 request:
00:06:37.727 {
00:06:37.727 "method": "env_dpdk_get_mem_stats",
00:06:37.727 "req_id": 1
00:06:37.727 }
00:06:37.727 Got JSON-RPC error response
00:06:37.727 response:
00:06:37.727 {
00:06:37.727 "code": -32601,
00:06:37.727 "message": "Method not found"
00:06:37.727 }
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@652 -- # es=1
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:06:37.727 12:10:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 442378
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 442378 ']'
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 442378
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@954 -- # uname
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:37.727 12:10:43 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 442378
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 442378' killing process with pid 442378
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@968 -- # kill 442378
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@973 -- # wait 442378
00:06:37.988
00:06:37.988 real 0m1.523s
00:06:37.988 user 0m1.800s
00:06:37.988 sys 0m0.412s
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:37.988 12:10:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:37.988 ************************************
00:06:37.988 END TEST app_cmdline
00:06:37.988 ************************************
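What cmdline.sh establishes here is the RPC allow-list: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else comes back as the JSON-RPC error object shown above (code -32601, "Method not found"). The same exchange by hand, from the spdk checkout:

build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py spdk_get_version                        # allowed: returns the version object above
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: lists exactly the two methods
scripts/rpc.py env_dpdk_get_mem_stats                  # blocked: "Method not found" (-32601)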
common/autotest_common.sh@10 -- # set +x 00:06:37.988 ************************************ 00:06:37.988 END TEST app_cmdline 00:06:37.988 ************************************ 00:06:37.988 12:10:43 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:37.988 12:10:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:37.988 12:10:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.988 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:38.250 ************************************ 00:06:38.250 START TEST version 00:06:38.250 ************************************ 00:06:38.250 12:10:43 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:38.250 * Looking for test storage... 00:06:38.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.250 12:10:43 version -- app/version.sh@17 -- # get_header_version major 00:06:38.250 12:10:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # cut -f2 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.250 12:10:43 version -- app/version.sh@17 -- # major=24 00:06:38.250 12:10:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.250 12:10:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # cut -f2 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.250 12:10:43 version -- app/version.sh@18 -- # minor=9 00:06:38.250 12:10:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.250 12:10:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # cut -f2 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.250 12:10:43 version -- app/version.sh@19 -- # patch=0 00:06:38.250 12:10:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.250 12:10:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # cut -f2 00:06:38.250 12:10:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.250 12:10:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.250 12:10:43 version -- app/version.sh@22 -- # version=24.9 00:06:38.250 12:10:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.250 12:10:43 version -- app/version.sh@28 -- # version=24.9rc0 00:06:38.251 12:10:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:38.251 12:10:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.251 12:10:43 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:38.251 12:10:43 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:38.251 00:06:38.251 real 0m0.166s 00:06:38.251 user 0m0.075s 00:06:38.251 sys 0m0.127s 00:06:38.251 12:10:43 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.251 12:10:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.251 ************************************ 00:06:38.251 END TEST version 00:06:38.251 ************************************ 00:06:38.251 12:10:43 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:38.251 12:10:43 -- spdk/autotest.sh@198 -- # uname -s 00:06:38.251 12:10:43 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:38.251 12:10:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:38.251 12:10:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:38.251 12:10:43 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:38.251 12:10:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:38.251 12:10:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:38.251 12:10:43 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:38.251 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 12:10:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:38.512 12:10:43 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:38.512 12:10:43 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:38.512 12:10:43 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:38.512 12:10:43 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:38.512 12:10:43 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:38.512 12:10:43 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.512 12:10:43 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:38.512 12:10:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.512 12:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 ************************************ 00:06:38.512 START TEST nvmf_tcp 00:06:38.512 ************************************ 00:06:38.512 12:10:43 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.512 * Looking for test storage... 00:06:38.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.512 12:10:43 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.512 12:10:44 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.512 12:10:44 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.512 12:10:44 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.512 12:10:44 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.512 12:10:44 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.512 12:10:44 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.512 12:10:44 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:38.512 12:10:44 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:38.512 12:10:44 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:38.512 12:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:38.512 12:10:44 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.512 12:10:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:38.512 12:10:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.512 12:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 ************************************ 00:06:38.512 START TEST nvmf_example 00:06:38.512 ************************************ 00:06:38.512 12:10:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.774 * Looking for test storage... 
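[In the nvmf/common.sh sourcing traced above, each run is seeded with fixed listener ports (4420, 4421, 4422), a fixed serial number, and a freshly minted host NQN. A minimal standalone sketch of that seeding step, with values copied from the log; the real test/nvmf/common.sh does considerably more:]

#!/usr/bin/env bash
# Defaults exactly as traced: three NVMe-oF listener ports and a fixed serial.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
# nvme-cli mints a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>;
# the trailing UUID doubles as the host ID, as seen in the trace.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host NQN $NVME_HOSTNQN (host ID $NVME_HOSTID)"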
00:06:38.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.774 12:10:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:46.982 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:46.982 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:46.982 Found net devices under 
0000:31:00.0: cvl_0_0 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:46.982 Found net devices under 0000:31:00.1: cvl_0_1 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.982 12:10:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:46.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:06:46.982 00:06:46.982 --- 10.0.0.2 ping statistics --- 00:06:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.982 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:06:46.982 00:06:46.982 --- 10.0.0.1 ping statistics --- 00:06:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.982 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=447157 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 447157 00:06:46.982 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 447157 ']' 00:06:46.983 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.983 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:46.983 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
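[Condensing the nvmf_tcp_init trace above: the target-side ice port (cvl_0_0) moves into its own network namespace, the initiator port (cvl_0_1) stays in the root namespace, and NVMe/TCP traffic to port 4420 is explicitly allowed before both directions are ping-verified. Every command below is lifted from the log:]

TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator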
00:06:46.983 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:46.983 12:10:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.552 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:47.812 12:10:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:47.812 EAL: No free 2048 kB hugepages reported on node 1 
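[The rpc_cmd sequence above provisions the example target end to end. The same steps expressed as scripts/rpc.py calls; method names and arguments are verbatim from the trace, and pointing rpc.py at the example app's RPC socket via -s is left implicit:]

RPC="scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
$RPC bdev_malloc_create 64 512                   # 64 MiB malloc bdev, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

[The spdk_nvme_perf invocation that follows then drives that listener with a queue depth of 64, 4 KiB random I/O at a 30% read mix for 10 seconds, which is what produces the latency table below.]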
00:06:57.803 Initializing NVMe Controllers 00:06:57.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:57.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:57.803 Initialization complete. Launching workers. 00:06:57.803 ======================================================== 00:06:57.803 Latency(us) 00:06:57.803 Device Information : IOPS MiB/s Average min max 00:06:57.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18460.09 72.11 3466.38 652.77 15833.01 00:06:57.803 ======================================================== 00:06:57.803 Total : 18460.09 72.11 3466.38 652.77 15833.01 00:06:57.803 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.065 rmmod nvme_tcp 00:06:58.065 rmmod nvme_fabrics 00:06:58.065 rmmod nvme_keyring 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 447157 ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 447157 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 447157 ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 447157 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 447157 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 447157' 00:06:58.065 killing process with pid 447157 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 447157 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 447157 00:06:58.065 nvmf threads initialize successfully 00:06:58.065 bdev subsystem init successfully 00:06:58.065 created a nvmf target service 00:06:58.065 create targets's poll groups done 00:06:58.065 all subsystems of target started 00:06:58.065 nvmf target is running 00:06:58.065 all subsystems of target stopped 00:06:58.065 destroy targets's poll groups done 00:06:58.065 destroyed the nvmf target service 00:06:58.065 bdev subsystem finish successfully 00:06:58.065 nvmf threads destroy successfully 00:06:58.065 12:11:03 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.065 12:11:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.615 00:07:00.615 real 0m21.722s 00:07:00.615 user 0m46.616s 00:07:00.615 sys 0m7.003s 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.615 12:11:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.615 ************************************ 00:07:00.615 END TEST nvmf_example 00:07:00.615 ************************************ 00:07:00.615 12:11:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:00.615 12:11:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:00.615 12:11:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.615 12:11:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.615 ************************************ 00:07:00.615 START TEST nvmf_filesystem 00:07:00.615 ************************************ 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:00.615 * Looking for test storage... 
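[For symmetry with the setup, the nvmf_example teardown traced above boils down to: unload the initiator-side kernel modules, kill the target process, and undo the address plumbing. The module and flush commands are verbatim from the log; the final netns removal is an assumption about what _remove_spdk_ns does, since its output is suppressed in the trace:]

modprobe -v -r nvme-tcp           # also pulls nvme_fabrics and nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
kill 447157                       # nvmfpid from this run
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk   # assumed cleanup inside _remove_spdk_ns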
00:07:00.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:00.615 12:11:05 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:00.615 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:00.616 #define SPDK_CONFIG_H 00:07:00.616 #define SPDK_CONFIG_APPS 1 00:07:00.616 #define SPDK_CONFIG_ARCH native 00:07:00.616 #undef SPDK_CONFIG_ASAN 00:07:00.616 #undef SPDK_CONFIG_AVAHI 00:07:00.616 #undef SPDK_CONFIG_CET 00:07:00.616 #define SPDK_CONFIG_COVERAGE 1 00:07:00.616 #define SPDK_CONFIG_CROSS_PREFIX 00:07:00.616 #undef SPDK_CONFIG_CRYPTO 00:07:00.616 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:00.616 #undef SPDK_CONFIG_CUSTOMOCF 00:07:00.616 #undef SPDK_CONFIG_DAOS 00:07:00.616 #define SPDK_CONFIG_DAOS_DIR 00:07:00.616 #define SPDK_CONFIG_DEBUG 1 00:07:00.616 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:00.616 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:00.616 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:00.616 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:00.616 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:00.616 #undef SPDK_CONFIG_DPDK_UADK 00:07:00.616 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:00.616 #define SPDK_CONFIG_EXAMPLES 1 00:07:00.616 #undef SPDK_CONFIG_FC 00:07:00.616 #define SPDK_CONFIG_FC_PATH 00:07:00.616 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:00.616 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:00.616 #undef SPDK_CONFIG_FUSE 00:07:00.616 #undef SPDK_CONFIG_FUZZER 00:07:00.616 #define SPDK_CONFIG_FUZZER_LIB 00:07:00.616 #undef SPDK_CONFIG_GOLANG 00:07:00.616 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:00.616 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:00.616 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:00.616 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:00.616 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:00.616 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:00.616 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:00.616 #define SPDK_CONFIG_IDXD 1 00:07:00.616 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:00.616 #undef SPDK_CONFIG_IPSEC_MB 00:07:00.616 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:00.616 #define SPDK_CONFIG_ISAL 1 00:07:00.616 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:00.616 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:00.616 #define SPDK_CONFIG_LIBDIR 00:07:00.616 #undef SPDK_CONFIG_LTO 00:07:00.616 #define SPDK_CONFIG_MAX_LCORES 00:07:00.616 #define SPDK_CONFIG_NVME_CUSE 1 00:07:00.616 #undef SPDK_CONFIG_OCF 00:07:00.616 #define SPDK_CONFIG_OCF_PATH 00:07:00.616 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:00.616 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:00.616 #define SPDK_CONFIG_PGO_DIR 00:07:00.616 #undef SPDK_CONFIG_PGO_USE 00:07:00.616 #define SPDK_CONFIG_PREFIX /usr/local 00:07:00.616 #undef SPDK_CONFIG_RAID5F 00:07:00.616 #undef SPDK_CONFIG_RBD 00:07:00.616 #define SPDK_CONFIG_RDMA 1 00:07:00.616 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:00.616 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:00.616 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:00.616 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:00.616 #define SPDK_CONFIG_SHARED 1 00:07:00.616 #undef SPDK_CONFIG_SMA 00:07:00.616 #define SPDK_CONFIG_TESTS 1 00:07:00.616 #undef SPDK_CONFIG_TSAN 00:07:00.616 #define SPDK_CONFIG_UBLK 1 00:07:00.616 #define SPDK_CONFIG_UBSAN 1 00:07:00.616 #undef SPDK_CONFIG_UNIT_TESTS 00:07:00.616 #undef SPDK_CONFIG_URING 00:07:00.616 #define SPDK_CONFIG_URING_PATH 00:07:00.616 #undef SPDK_CONFIG_URING_ZNS 00:07:00.616 #undef SPDK_CONFIG_USDT 00:07:00.616 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:00.616 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:00.616 #define SPDK_CONFIG_VFIO_USER 1 00:07:00.616 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:00.616 #define SPDK_CONFIG_VHOST 1 00:07:00.616 #define SPDK_CONFIG_VIRTIO 1 00:07:00.616 #undef SPDK_CONFIG_VTUNE 00:07:00.616 #define SPDK_CONFIG_VTUNE_DIR 00:07:00.616 #define SPDK_CONFIG_WERROR 1 00:07:00.616 #define SPDK_CONFIG_WPDK_DIR 00:07:00.616 #undef SPDK_CONFIG_XNVME 00:07:00.616 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.616 12:11:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:00.617 12:11:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:00.617 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:00.618 12:11:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
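(Annotation) The entries above — common/autotest_common.sh@193 through @240 — pin sanitizer behavior before any test binary starts: ASAN/UBSAN options are exported, and a LeakSanitizer suppression file is regenerated so the known libfuse3 leak report does not fail the run. A minimal sketch of that same wiring, using only the values visible in this trace:

    # regenerate the leak-suppression file that LSAN consumes at process exit
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    # fail fast on sanitizer hits while keeping coredumps usable
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134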
00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 449952 ]] 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 449952 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.dNPxuO 00:07:00.618 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dNPxuO/tests/target /tmp/spdk.dNPxuO 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122876502016 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370943488 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6494441472 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64629833728 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685469696 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864232960 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874190336 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9957376 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=353280 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:00.619 12:11:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=150528 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684810240 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685473792 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=663552 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937089024 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937093120 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:00.619 * Looking for test storage... 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122876502016 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8709033984 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.619 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:00.620 12:11:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:08.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:08.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:08.763 Found net devices under 0000:31:00.0: cvl_0_0 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:08.763 Found net devices under 0000:31:00.1: cvl_0_1 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:08.763 12:11:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:08.763 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:08.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:08.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms
00:07:08.763
00:07:08.763 --- 10.0.0.2 ping statistics ---
00:07:08.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:08.763 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:08.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:08.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:07:08.764
00:07:08.764 --- 10.0.0.1 ping statistics ---
00:07:08.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:08.764 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:08.764 ************************************
00:07:08.764 START TEST nvmf_filesystem_no_in_capsule
00:07:08.764 ************************************
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=454267
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 454267
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 454267 ']'
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:08.764 12:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.025 [2024-06-10 12:11:14.422307] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:09.025 [2024-06-10 12:11:14.422386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.025 [2024-06-10 12:11:14.502078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.025 [2024-06-10 12:11:14.579834] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.025 [2024-06-10 12:11:14.579871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.025 [2024-06-10 12:11:14.579879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.025 [2024-06-10 12:11:14.579885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.025 [2024-06-10 12:11:14.579891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.025 [2024-06-10 12:11:14.580039] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.025 [2024-06-10 12:11:14.580158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.025 [2024-06-10 12:11:14.580297] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.025 [2024-06-10 12:11:14.580486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.596 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:09.596 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:09.596 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.596 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:09.596 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 [2024-06-10 12:11:15.240662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.858 12:11:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.858 [2024-06-10 12:11:15.368463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
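Condensed, the target-side provisioning traced at filesystem.sh@52-56 above amounts to the following RPC sequence. This is a minimal sketch, not the harness code itself: the test issues these calls through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace, and the scripts/rpc.py path below is an assumption.

  #!/usr/bin/env bash
  # Sketch of the subsystem setup traced above (no-in-capsule run, hence -c 0).
  RPC="./scripts/rpc.py"                 # assumed location of SPDK's RPC client
  NQN="nqn.2016-06.io.spdk:cnode1"

  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0           # TCP transport, in-capsule data size 0
  $RPC bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME  # allow any host, fixed serial
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1                   # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs JSON that follows in the trace ("block_size": 512, "num_blocks": 1048576) is how the harness recomputes the 536870912-byte size it later compares against the connected NVMe device.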
00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:09.858 { 00:07:09.858 "name": "Malloc1", 00:07:09.858 "aliases": [ 00:07:09.858 "34bd2112-0f0d-4a36-a442-291dd169355d" 00:07:09.858 ], 00:07:09.858 "product_name": "Malloc disk", 00:07:09.858 "block_size": 512, 00:07:09.858 "num_blocks": 1048576, 00:07:09.858 "uuid": "34bd2112-0f0d-4a36-a442-291dd169355d", 00:07:09.858 "assigned_rate_limits": { 00:07:09.858 "rw_ios_per_sec": 0, 00:07:09.858 "rw_mbytes_per_sec": 0, 00:07:09.858 "r_mbytes_per_sec": 0, 00:07:09.858 "w_mbytes_per_sec": 0 00:07:09.858 }, 00:07:09.858 "claimed": true, 00:07:09.858 "claim_type": "exclusive_write", 00:07:09.858 "zoned": false, 00:07:09.858 "supported_io_types": { 00:07:09.858 "read": true, 00:07:09.858 "write": true, 00:07:09.858 "unmap": true, 00:07:09.858 "write_zeroes": true, 00:07:09.858 "flush": true, 00:07:09.858 "reset": true, 00:07:09.858 "compare": false, 00:07:09.858 "compare_and_write": false, 00:07:09.858 "abort": true, 00:07:09.858 "nvme_admin": false, 00:07:09.858 "nvme_io": false 00:07:09.858 }, 00:07:09.858 "memory_domains": [ 00:07:09.858 { 00:07:09.858 "dma_device_id": "system", 00:07:09.858 "dma_device_type": 1 00:07:09.858 }, 00:07:09.858 { 00:07:09.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.858 "dma_device_type": 2 00:07:09.858 } 00:07:09.858 ], 00:07:09.858 "driver_specific": {} 00:07:09.858 } 00:07:09.858 ]' 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:09.858 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:10.127 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:10.127 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:10.127 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:10.127 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:10.127 12:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.515 12:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.515 12:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:11.515 12:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.515 12:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:11.515 12:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:13.477 12:11:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:13.477 12:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:13.752 12:11:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:13.752 12:11:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 ************************************ 00:07:14.756 START TEST filesystem_ext4 00:07:14.756 ************************************ 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:14.756 12:11:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:14.756 12:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:14.756 mke2fs 1.46.5 (30-Dec-2021) 00:07:15.016 Discarding device blocks: 0/522240 done 00:07:15.016 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:15.016 Filesystem UUID: 9f902d40-2b8d-41e0-8e6e-05d0c1348948 00:07:15.016 Superblock backups stored on blocks: 00:07:15.016 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:15.016 00:07:15.016 Allocating group tables: 0/64 done 00:07:15.016 Writing inode tables: 0/64 done 00:07:16.399 Creating journal (8192 blocks): done 00:07:16.399 Writing superblocks and filesystem accounting information: 0/64 done 00:07:16.399 00:07:16.399 12:11:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:16.399 12:11:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 454267 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.661 12:11:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.661 00:07:16.661 real 0m1.817s 00:07:16.661 user 0m0.022s 00:07:16.661 sys 0m0.052s 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:16.661 ************************************ 00:07:16.661 END TEST filesystem_ext4 00:07:16.661 ************************************ 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.661 ************************************ 00:07:16.661 START TEST filesystem_btrfs 00:07:16.661 ************************************ 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:16.661 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:16.922 btrfs-progs v6.6.2 00:07:16.922 See https://btrfs.readthedocs.io for more information. 00:07:16.922 00:07:16.922 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:16.922 NOTE: several default settings have changed in version 5.15, please make sure 00:07:16.922 this does not affect your deployments: 00:07:16.922 - DUP for metadata (-m dup) 00:07:16.922 - enabled no-holes (-O no-holes) 00:07:16.922 - enabled free-space-tree (-R free-space-tree) 00:07:16.922 00:07:16.922 Label: (null) 00:07:16.922 UUID: 5325ad0a-9842-4cb7-afb5-7b4181986616 00:07:16.922 Node size: 16384 00:07:16.922 Sector size: 4096 00:07:16.922 Filesystem size: 510.00MiB 00:07:16.922 Block group profiles: 00:07:16.922 Data: single 8.00MiB 00:07:16.922 Metadata: DUP 32.00MiB 00:07:16.922 System: DUP 8.00MiB 00:07:16.922 SSD detected: yes 00:07:16.922 Zoned device: no 00:07:16.922 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:16.922 Runtime features: free-space-tree 00:07:16.922 Checksum: crc32c 00:07:16.922 Number of devices: 1 00:07:16.922 Devices: 00:07:16.922 ID SIZE PATH 00:07:16.922 1 510.00MiB /dev/nvme0n1p1 00:07:16.922 00:07:16.922 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:16.922 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 454267 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.495 00:07:17.495 real 0m0.633s 00:07:17.495 user 0m0.027s 00:07:17.495 sys 0m0.062s 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.495 ************************************ 00:07:17.495 END TEST filesystem_btrfs 00:07:17.495 ************************************ 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:17.495 12:11:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.495 ************************************ 00:07:17.495 START TEST filesystem_xfs 00:07:17.495 ************************************ 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:17.495 12:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:17.495 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:17.495 = sectsz=512 attr=2, projid32bit=1 00:07:17.495 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:17.495 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:17.495 data = bsize=4096 blocks=130560, imaxpct=25 00:07:17.495 = sunit=0 swidth=0 blks 00:07:17.495 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:17.495 log =internal log bsize=4096 blocks=16384, version=2 00:07:17.495 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:17.495 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:18.882 Discarding blocks...Done. 
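Across the ext4, btrfs and xfs subtests the body is identical; stripped of xtrace noise it is roughly the sketch below, with every command taken from the trace above (error handling and the harness's retry loop omitted):

  # Sketch of one nvmf_filesystem_create pass; $fstype is ext4, btrfs or xfs.
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # filesystem.sh@68
  partprobe && sleep 1
  case "$fstype" in ext4) force=-F ;; *) force=-f ;; esac       # make_filesystem: ext4 takes -F, btrfs/xfs take -f
  mkfs."$fstype" "$force" /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync                                 # write through the NVMe/TCP path and flush
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                            # target (pid 454267 here) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1                         # device and partition must still be visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1

The "real 0m…s" lines after each END TEST banner are the wall-clock cost of exactly this loop for each filesystem.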
00:07:18.882 12:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:18.882 12:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 454267 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.793 00:07:20.793 real 0m3.414s 00:07:20.793 user 0m0.025s 00:07:20.793 sys 0m0.056s 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.793 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.793 ************************************ 00:07:20.793 END TEST filesystem_xfs 00:07:20.793 ************************************ 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:21.054 
12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 454267 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 454267 ']' 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 454267 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:21.054 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 454267 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 454267' 00:07:21.313 killing process with pid 454267 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 454267 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 454267 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:21.313 00:07:21.313 real 0m12.561s 00:07:21.313 user 0m49.390s 00:07:21.313 sys 0m1.069s 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:21.313 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 ************************************ 00:07:21.313 END TEST nvmf_filesystem_no_in_capsule 00:07:21.313 ************************************ 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 
************************************ 00:07:21.573 START TEST nvmf_filesystem_in_capsule 00:07:21.573 ************************************ 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=456929 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 456929 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 456929 ']' 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.573 12:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:21.573 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 [2024-06-10 12:11:27.050786] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:21.573 [2024-06-10 12:11:27.050842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.573 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.573 [2024-06-10 12:11:27.126906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.834 [2024-06-10 12:11:27.200415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.834 [2024-06-10 12:11:27.200452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.834 [2024-06-10 12:11:27.200460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.834 [2024-06-10 12:11:27.200466] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.834 [2024-06-10 12:11:27.200472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.834 [2024-06-10 12:11:27.200615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.834 [2024-06-10 12:11:27.200724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.834 [2024-06-10 12:11:27.200880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.834 [2024-06-10 12:11:27.200882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 [2024-06-10 12:11:27.871867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 Malloc1 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 12:11:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 [2024-06-10 12:11:27.998425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.405 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.666 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.666 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:22.666 { 00:07:22.666 "name": "Malloc1", 00:07:22.666 "aliases": [ 00:07:22.666 "856aa2f7-dc6c-4b5f-884d-76a5b89c2ca4" 00:07:22.666 ], 00:07:22.666 "product_name": "Malloc disk", 00:07:22.666 "block_size": 512, 00:07:22.666 "num_blocks": 1048576, 00:07:22.666 "uuid": "856aa2f7-dc6c-4b5f-884d-76a5b89c2ca4", 00:07:22.666 "assigned_rate_limits": { 00:07:22.666 "rw_ios_per_sec": 0, 00:07:22.666 "rw_mbytes_per_sec": 0, 00:07:22.666 "r_mbytes_per_sec": 0, 00:07:22.666 "w_mbytes_per_sec": 0 00:07:22.666 }, 00:07:22.666 "claimed": true, 00:07:22.666 "claim_type": "exclusive_write", 00:07:22.666 "zoned": false, 00:07:22.666 "supported_io_types": { 00:07:22.666 "read": true, 00:07:22.666 "write": true, 00:07:22.666 "unmap": true, 00:07:22.666 "write_zeroes": true, 00:07:22.666 "flush": true, 00:07:22.666 "reset": true, 00:07:22.666 "compare": false, 00:07:22.666 "compare_and_write": false, 00:07:22.666 "abort": true, 00:07:22.666 "nvme_admin": false, 00:07:22.667 "nvme_io": false 00:07:22.667 }, 00:07:22.667 "memory_domains": [ 00:07:22.667 { 00:07:22.667 "dma_device_id": "system", 00:07:22.667 "dma_device_type": 1 00:07:22.667 }, 00:07:22.667 { 00:07:22.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.667 "dma_device_type": 2 00:07:22.667 } 00:07:22.667 ], 00:07:22.667 "driver_specific": {} 00:07:22.667 } 00:07:22.667 ]' 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:22.667 12:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.053 12:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.053 12:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:24.053 12:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.053 12:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:24.053 12:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:26.598 12:11:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.598 12:11:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:26.598 12:11:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.538 ************************************ 00:07:27.538 START TEST filesystem_in_capsule_ext4 00:07:27.538 ************************************ 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:27.538 12:11:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:27.538 mke2fs 1.46.5 (30-Dec-2021) 00:07:27.798 Discarding device blocks: 0/522240 done 00:07:27.798 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:27.798 Filesystem UUID: 95c8a62c-80de-46ec-9662-0870f904d3f6 00:07:27.798 Superblock backups stored on blocks: 00:07:27.798 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:27.798 00:07:27.798 Allocating group tables: 0/64 done 00:07:27.798 Writing inode tables: 0/64 done 00:07:31.096 Creating journal (8192 blocks): done 00:07:31.096 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:31.096 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 456929 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.096 00:07:31.096 real 0m3.210s 00:07:31.096 user 0m0.026s 00:07:31.096 sys 0m0.049s 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:31.096 ************************************ 00:07:31.096 END TEST filesystem_in_capsule_ext4 00:07:31.096 ************************************ 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.096 ************************************ 00:07:31.096 START TEST filesystem_in_capsule_btrfs 00:07:31.096 ************************************ 00:07:31.096 12:11:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:31.096 btrfs-progs v6.6.2 00:07:31.096 See https://btrfs.readthedocs.io for more information. 00:07:31.096 00:07:31.096 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:31.096 NOTE: several default settings have changed in version 5.15, please make sure 00:07:31.096 this does not affect your deployments: 00:07:31.096 - DUP for metadata (-m dup) 00:07:31.096 - enabled no-holes (-O no-holes) 00:07:31.096 - enabled free-space-tree (-R free-space-tree) 00:07:31.096 00:07:31.096 Label: (null) 00:07:31.096 UUID: fbaeba85-b036-4eec-b2fc-048758e884e4 00:07:31.096 Node size: 16384 00:07:31.096 Sector size: 4096 00:07:31.096 Filesystem size: 510.00MiB 00:07:31.096 Block group profiles: 00:07:31.096 Data: single 8.00MiB 00:07:31.096 Metadata: DUP 32.00MiB 00:07:31.096 System: DUP 8.00MiB 00:07:31.096 SSD detected: yes 00:07:31.096 Zoned device: no 00:07:31.096 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:31.096 Runtime features: free-space-tree 00:07:31.096 Checksum: crc32c 00:07:31.096 Number of devices: 1 00:07:31.096 Devices: 00:07:31.096 ID SIZE PATH 00:07:31.096 1 510.00MiB /dev/nvme0n1p1 00:07:31.096 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:31.096 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 456929 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.357 00:07:31.357 real 0m0.439s 00:07:31.357 user 0m0.025s 00:07:31.357 sys 0m0.061s 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:31.357 ************************************ 00:07:31.357 END TEST filesystem_in_capsule_btrfs 00:07:31.357 ************************************ 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.357 ************************************ 00:07:31.357 START TEST filesystem_in_capsule_xfs 00:07:31.357 ************************************ 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:31.357 12:11:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:31.617 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:31.617 = sectsz=512 attr=2, projid32bit=1 00:07:31.617 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:31.617 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:31.617 data = bsize=4096 blocks=130560, imaxpct=25 00:07:31.617 = sunit=0 swidth=0 blks 00:07:31.617 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:31.617 log =internal log bsize=4096 blocks=16384, version=2 00:07:31.617 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:31.617 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:32.191 Discarding blocks...Done. 
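The make_filesystem calls traced through the three filesystem_in_capsule_* tests above (common/autotest_common.sh@925-944) all follow the same shape for ext4, btrfs, and xfs. A minimal sketch of that helper, reconstructed from the traced lines; the retry bound and sleep are assumptions, since this run only shows i=0 and a clean first mkfs attempt:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # mkfs.ext4 spells "force" as -F; mkfs.btrfs and mkfs.xfs take -f
    # (this is the '[' btrfs = ext4 ']' branch visible at @930/@933 above)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # Assumed retry loop: only the i=0 initialization and an immediately
    # successful mkfs appear in this log, so the bound of 15 is a guess.
    while ! mkfs."$fstype" $force "$dev_name"; do
        [ "$i" -ge 15 ] && return 1
        i=$((i + 1))
        sleep 1
    done
    return 0
}

Each test then exercises the fresh filesystem the same way (filesystem.sh@23-43 in the traces): mount it, touch and remove a file with syncs in between, unmount, and confirm via lsblk that the namespace and partition are still visible. That mount-to-lsblk cycle is what the per-filesystem timing blocks measure.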
00:07:32.191 12:11:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0
00:07:32.191 12:11:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:34.101 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:34.101 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:07:34.101 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:34.101 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 456929
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:34.102 
00:07:34.102 real 0m2.719s
00:07:34.102 user 0m0.017s
00:07:34.102 sys 0m0.062s
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:07:34.102 ************************************
00:07:34.102 END TEST filesystem_in_capsule_xfs
00:07:34.102 ************************************
00:07:34.102 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:07:34.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 456929
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 456929 ']'
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 456929
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:34.362 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 456929
00:07:34.622 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:34.622 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:34.622 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 456929'
00:07:34.622 killing process with pid 456929
00:07:34.622 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 456929
00:07:34.622 12:11:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 456929
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:07:34.883 
00:07:34.883 real 0m13.242s
00:07:34.883 user 0m52.126s
00:07:34.883 sys 0m1.082s
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:34.883 ************************************
00:07:34.883 END TEST nvmf_filesystem_in_capsule
00:07:34.883 ************************************
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:34.883 rmmod nvme_tcp
00:07:34.883 rmmod nvme_fabrics
00:07:34.883 rmmod nvme_keyring
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:34.883 12:11:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:37.426 12:11:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:37.426 
00:07:37.426 real 0m36.551s
00:07:37.426 user 1m43.934s
00:07:37.426 sys 0m8.400s
00:07:37.426 12:11:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:37.426 12:11:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:37.426 ************************************
00:07:37.426 END TEST nvmf_filesystem
00:07:37.426 ************************************
00:07:37.426 12:11:42 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:37.426 12:11:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:37.426 12:11:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:37.426 12:11:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:37.426 ************************************
00:07:37.426 START TEST nvmf_target_discovery
00:07:37.426 ************************************
00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:37.426 * Looking for test storage...
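The nvmf_target_discovery run starting here drives discovery.sh: bring up an nvmf target inside a network namespace, publish four null-bdev subsystems plus a discovery referral, and check that the kernel initiator and the SPDK RPC layer agree on what is discoverable. A condensed sketch of that setup, taken from the rpc_cmd traces that follow, with rpc.py standing in for the traced rpc_cmd wrapper (addresses, sizes, and NQNs are the ones used in this run):

# Publish four subsystems, each backed by a null bdev (102400 blocks x 512 B)
NULL_BDEV_SIZE=102400    # discovery.sh@11
NULL_BLOCK_SIZE=512      # discovery.sh@12
for i in $(seq 1 4); do
    rpc.py bdev_null_create "Null$i" "$NULL_BDEV_SIZE" "$NULL_BLOCK_SIZE"
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# The discovery log should then hold 6 records (1 current discovery
# subsystem, 4 NVMe subsystems, 1 referral), which is exactly what the
# nvme discover output further down reports.
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

NVME_HOSTNQN and NVME_HOSTID are generated by the nvme gen-hostnqn call traced just below.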
00:07:37.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.426 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.427 12:11:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.427 12:11:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.564 12:11:50 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:45.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:45.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:45.564 Found net devices under 0000:31:00.0: cvl_0_0 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:45.564 Found net devices under 0000:31:00.1: cvl_0_1 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.564 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:07:45.565 00:07:45.565 --- 10.0.0.2 ping statistics --- 00:07:45.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.565 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:07:45.565 00:07:45.565 --- 10.0.0.1 ping statistics --- 00:07:45.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.565 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=464446 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 464446 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 464446 ']' 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:45.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:45.565 12:11:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.565 [2024-06-10 12:11:50.911129] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:45.565 [2024-06-10 12:11:50.911221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.565 [2024-06-10 12:11:50.990518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.565 [2024-06-10 12:11:51.066134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.565 [2024-06-10 12:11:51.066173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.565 [2024-06-10 12:11:51.066181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.565 [2024-06-10 12:11:51.066188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.565 [2024-06-10 12:11:51.066199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.565 [2024-06-10 12:11:51.066282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.565 [2024-06-10 12:11:51.066417] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.565 [2024-06-10 12:11:51.066575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.565 [2024-06-10 12:11:51.066576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.135 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 [2024-06-10 12:11:51.741790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:46.396 12:11:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 Null1 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 [2024-06-10 12:11:51.798091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 Null2 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:46.396 12:11:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 Null3 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 Null4 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.396 12:11:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:07:46.657 00:07:46.657 Discovery Log Number of Records 6, Generation counter 6 00:07:46.657 =====Discovery Log Entry 0====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: current discovery subsystem 00:07:46.657 treq: not required 00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4420 00:07:46.657 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: explicit discovery connections, duplicate discovery information 00:07:46.657 sectype: none 00:07:46.657 =====Discovery Log Entry 1====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: nvme subsystem 00:07:46.657 treq: not required 00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4420 00:07:46.657 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: none 00:07:46.657 sectype: none 00:07:46.657 =====Discovery Log Entry 2====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: nvme subsystem 00:07:46.657 treq: not required 00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4420 00:07:46.657 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: none 00:07:46.657 sectype: none 00:07:46.657 =====Discovery Log Entry 3====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: nvme subsystem 00:07:46.657 treq: not required 00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4420 00:07:46.657 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: none 00:07:46.657 sectype: none 00:07:46.657 =====Discovery Log Entry 4====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: nvme subsystem 00:07:46.657 treq: not required 
00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4420 00:07:46.657 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: none 00:07:46.657 sectype: none 00:07:46.657 =====Discovery Log Entry 5====== 00:07:46.657 trtype: tcp 00:07:46.657 adrfam: ipv4 00:07:46.657 subtype: discovery subsystem referral 00:07:46.657 treq: not required 00:07:46.657 portid: 0 00:07:46.657 trsvcid: 4430 00:07:46.657 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:46.657 traddr: 10.0.0.2 00:07:46.657 eflags: none 00:07:46.657 sectype: none 00:07:46.657 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:46.657 Perform nvmf subsystem discovery via RPC 00:07:46.657 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:46.657 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.657 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.657 [ 00:07:46.657 { 00:07:46.657 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:46.657 "subtype": "Discovery", 00:07:46.657 "listen_addresses": [ 00:07:46.657 { 00:07:46.657 "trtype": "TCP", 00:07:46.657 "adrfam": "IPv4", 00:07:46.657 "traddr": "10.0.0.2", 00:07:46.657 "trsvcid": "4420" 00:07:46.657 } 00:07:46.657 ], 00:07:46.657 "allow_any_host": true, 00:07:46.658 "hosts": [] 00:07:46.658 }, 00:07:46.658 { 00:07:46.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.658 "subtype": "NVMe", 00:07:46.658 "listen_addresses": [ 00:07:46.658 { 00:07:46.658 "trtype": "TCP", 00:07:46.658 "adrfam": "IPv4", 00:07:46.658 "traddr": "10.0.0.2", 00:07:46.658 "trsvcid": "4420" 00:07:46.658 } 00:07:46.658 ], 00:07:46.658 "allow_any_host": true, 00:07:46.658 "hosts": [], 00:07:46.658 "serial_number": "SPDK00000000000001", 00:07:46.658 "model_number": "SPDK bdev Controller", 00:07:46.658 "max_namespaces": 32, 00:07:46.658 "min_cntlid": 1, 00:07:46.658 "max_cntlid": 65519, 00:07:46.658 "namespaces": [ 00:07:46.658 { 00:07:46.658 "nsid": 1, 00:07:46.658 "bdev_name": "Null1", 00:07:46.658 "name": "Null1", 00:07:46.658 "nguid": "D82E63E5732F4929B6215EF3499691AD", 00:07:46.658 "uuid": "d82e63e5-732f-4929-b621-5ef3499691ad" 00:07:46.658 } 00:07:46.658 ] 00:07:46.658 }, 00:07:46.658 { 00:07:46.658 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:46.658 "subtype": "NVMe", 00:07:46.658 "listen_addresses": [ 00:07:46.658 { 00:07:46.658 "trtype": "TCP", 00:07:46.658 "adrfam": "IPv4", 00:07:46.658 "traddr": "10.0.0.2", 00:07:46.658 "trsvcid": "4420" 00:07:46.658 } 00:07:46.658 ], 00:07:46.658 "allow_any_host": true, 00:07:46.658 "hosts": [], 00:07:46.658 "serial_number": "SPDK00000000000002", 00:07:46.658 "model_number": "SPDK bdev Controller", 00:07:46.658 "max_namespaces": 32, 00:07:46.658 "min_cntlid": 1, 00:07:46.658 "max_cntlid": 65519, 00:07:46.658 "namespaces": [ 00:07:46.658 { 00:07:46.658 "nsid": 1, 00:07:46.658 "bdev_name": "Null2", 00:07:46.658 "name": "Null2", 00:07:46.658 "nguid": "57FC8B65EABC48C285C8FDA10AA4526C", 00:07:46.658 "uuid": "57fc8b65-eabc-48c2-85c8-fda10aa4526c" 00:07:46.658 } 00:07:46.658 ] 00:07:46.658 }, 00:07:46.658 { 00:07:46.658 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:46.658 "subtype": "NVMe", 00:07:46.658 "listen_addresses": [ 00:07:46.658 { 00:07:46.658 "trtype": "TCP", 00:07:46.658 "adrfam": "IPv4", 00:07:46.658 "traddr": "10.0.0.2", 00:07:46.658 "trsvcid": "4420" 00:07:46.658 } 00:07:46.658 ], 00:07:46.658 "allow_any_host": true, 
00:07:46.658 "hosts": [], 00:07:46.658 "serial_number": "SPDK00000000000003", 00:07:46.658 "model_number": "SPDK bdev Controller", 00:07:46.658 "max_namespaces": 32, 00:07:46.658 "min_cntlid": 1, 00:07:46.658 "max_cntlid": 65519, 00:07:46.658 "namespaces": [ 00:07:46.658 { 00:07:46.658 "nsid": 1, 00:07:46.658 "bdev_name": "Null3", 00:07:46.658 "name": "Null3", 00:07:46.658 "nguid": "5EF98542D82A4700A8B6017646AD023E", 00:07:46.658 "uuid": "5ef98542-d82a-4700-a8b6-017646ad023e" 00:07:46.658 } 00:07:46.658 ] 00:07:46.658 }, 00:07:46.658 { 00:07:46.658 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:46.658 "subtype": "NVMe", 00:07:46.658 "listen_addresses": [ 00:07:46.658 { 00:07:46.658 "trtype": "TCP", 00:07:46.658 "adrfam": "IPv4", 00:07:46.658 "traddr": "10.0.0.2", 00:07:46.658 "trsvcid": "4420" 00:07:46.658 } 00:07:46.658 ], 00:07:46.658 "allow_any_host": true, 00:07:46.658 "hosts": [], 00:07:46.658 "serial_number": "SPDK00000000000004", 00:07:46.658 "model_number": "SPDK bdev Controller", 00:07:46.658 "max_namespaces": 32, 00:07:46.658 "min_cntlid": 1, 00:07:46.658 "max_cntlid": 65519, 00:07:46.658 "namespaces": [ 00:07:46.658 { 00:07:46.658 "nsid": 1, 00:07:46.658 "bdev_name": "Null4", 00:07:46.658 "name": "Null4", 00:07:46.658 "nguid": "1461310DB3AF439CB9C74268E4DB7DFC", 00:07:46.658 "uuid": "1461310d-b3af-439c-b9c7-4268e4db7dfc" 00:07:46.658 } 00:07:46.658 ] 00:07:46.658 } 00:07:46.658 ] 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:46.658 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:46.918 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:46.919 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 464446 ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 464446
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 464446 ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 464446
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 464446
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 464446'
killing process with pid 464446
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 464446
00:07:46.919 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 464446
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:47.179 12:11:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:49.092 12:11:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:49.092
00:07:49.092 real 0m12.108s
00:07:49.092 user 0m8.378s
00:07:49.092 sys 0m6.362s
00:07:49.092 12:11:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:49.092 12:11:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:49.092 ************************************
00:07:49.092 END TEST nvmf_target_discovery
00:07:49.092 ************************************
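The teardown traced above is driven entirely by SPDK JSON-RPC calls: each nqn.2016-06.io.spdk:cnodeN subsystem is removed with nvmf_delete_subsystem, its backing null bdev with bdev_null_delete, the test referral is dropped, and bdev_get_bdevs must then come back empty. What follows is a minimal standalone sketch of the same sequence, assuming a running nvmf_tgt reachable via the default /var/tmp/spdk.sock and SPDK's scripts/rpc.py; the rpc.py path is illustrative, not taken from this run.

#!/usr/bin/env bash
# Hypothetical standalone teardown mirroring the target/discovery.sh trace.
# Assumes a running nvmf_tgt on the default RPC socket /var/tmp/spdk.sock.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # adjust to your tree

for i in $(seq 1 4); do
    # Delete the subsystem first so no namespace still references the bdev.
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    "$rpc" bdev_null_delete "Null${i}"
done

# Remove the discovery referral added during setup.
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# Verify nothing leaked: bdev_get_bdevs must return an empty list.
remaining=$("$rpc" bdev_get_bdevs | jq -r '.[].name')
[ -z "$remaining" ] || { echo "leaked bdevs: $remaining" >&2; exit 1; }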
00:07:49.092 12:11:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:49.092 12:11:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:49.092 12:11:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:49.092 12:11:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:49.092 ************************************
00:07:49.092 START TEST nvmf_referrals
00:07:49.092 ************************************
00:07:49.092 12:11:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:49.354 * Looking for test storage...
00:07:49.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.354 12:11:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.490 12:12:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.490 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:57.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:57.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.491 12:12:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:57.491 Found net devices under 0000:31:00.0: cvl_0_0 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:57.491 Found net devices under 0000:31:00.1: cvl_0_1 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.491 12:12:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:57.491 12:12:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:57.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:57.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms
00:07:57.491
00:07:57.491 --- 10.0.0.2 ping statistics ---
00:07:57.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:57.491 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:57.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:57.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms
00:07:57.491
00:07:57.491 --- 10.0.0.1 ping statistics ---
00:07:57.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:57.491 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=469618
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 469618
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 469618 ']'
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:57.491 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:57.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:57.752 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:57.752 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:57.752 [2024-06-10 12:12:03.145596] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:07:57.752 [2024-06-10 12:12:03.145658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:57.752 EAL: No free 2048 kB hugepages reported on node 1
00:07:57.752 [2024-06-10 12:12:03.226535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:57.752 [2024-06-10 12:12:03.303670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:57.752 [2024-06-10 12:12:03.303712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:57.752 [2024-06-10 12:12:03.303719] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:57.752 [2024-06-10 12:12:03.303726] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:57.752 [2024-06-10 12:12:03.303731] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:57.752 [2024-06-10 12:12:03.303869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:07:57.752 [2024-06-10 12:12:03.303982] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:07:57.752 [2024-06-10 12:12:03.304141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.752 [2024-06-10 12:12:03.304143] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:58.698 [2024-06-10 12:12:03.975717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:58.698 [2024-06-10 12:12:03.989448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.698 12:12:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 
-s 8009 -o json 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.698 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:59.021 12:12:04 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:59.021 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:59.022 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.022 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.022 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.022 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.022 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.282 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.543 12:12:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.543 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.804 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:59.805 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.066 rmmod nvme_tcp 00:08:00.066 rmmod nvme_fabrics 00:08:00.066 rmmod nvme_keyring 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 469618 ']' 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 469618 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 469618 ']' 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 469618 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 469618 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 469618' 00:08:00.066 killing process with pid 469618 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 469618 00:08:00.066 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 469618 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.327 12:12:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.240 12:12:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.240 00:08:02.240 real 0m13.116s 00:08:02.240 user 0m12.859s 00:08:02.240 sys 0m6.671s 00:08:02.240 12:12:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:08:02.240 12:12:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.240 ************************************ 00:08:02.240 END TEST nvmf_referrals 00:08:02.240 ************************************ 00:08:02.240 12:12:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:02.240 12:12:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:02.240 12:12:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.240 12:12:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.500 ************************************ 00:08:02.500 START TEST nvmf_connect_disconnect 00:08:02.500 ************************************ 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:02.500 * Looking for test storage... 00:08:02.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.500 12:12:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.500 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.500 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.500 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.501 
12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.501 12:12:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.648 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
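The e810/x722/mlx arrays being filled here and just below map vendor:device PCI IDs (0x8086 Intel, 0x15b3 Mellanox) to candidate NVMe-oF NICs; the scan traced further down then reports which IDs are present and which kernel net devices sit on top of them. A condensed sketch of that discovery (this version queries sysfs directly; the common.sh helper instead reads a cached bus scan via pci_bus_cache):
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")            # e.g. 0x8086 (Intel)
      device=$(<"$pci/device")            # e.g. 0x159b (an e810 ID, as registered above)
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      ls "$pci/net" 2>/dev/null           # net devices bound to this port, e.g. cvl_0_0
  done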
00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:10.649 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:10.649 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.649 
12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:10.649 Found net devices under 0000:31:00.0: cvl_0_0 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:10.649 Found net devices under 0000:31:00.1: cvl_0_1 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.649 12:12:15 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.649 12:12:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:08:10.649 00:08:10.649 --- 10.0.0.2 ping statistics --- 00:08:10.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.649 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:08:10.649 00:08:10.649 --- 10.0.0.1 ping statistics --- 00:08:10.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.649 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:10.649 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=475487 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 475487 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 475487 ']' 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:10.650 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.650 [2024-06-10 12:12:16.198040] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
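With the namespace plumbing verified by the two pings above, nvmf_tgt is started inside cvl_0_0_ns_spdk and the rpc_cmd trace below provisions it. Condensed, connect_disconnect.sh is roughly equivalent to the following (transport options, NQN, serial, and address as recorded in this run; readiness checks omitted):
  rpc=./scripts/rpc.py                       # rpc_cmd wraps this against /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512             # 64 MiB bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                    # num_iterations=5, as set below
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # source of the 'disconnected 1 controller(s)' lines
  done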
00:08:10.650 [2024-06-10 12:12:16.198090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.650 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.910 [2024-06-10 12:12:16.270917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.910 [2024-06-10 12:12:16.335684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.910 [2024-06-10 12:12:16.335720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.910 [2024-06-10 12:12:16.335727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.910 [2024-06-10 12:12:16.335734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.910 [2024-06-10 12:12:16.335739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.910 [2024-06-10 12:12:16.335883] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.910 [2024-06-10 12:12:16.335997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.910 [2024-06-10 12:12:16.336151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.910 [2024-06-10 12:12:16.336152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.479 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:11.479 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:11.479 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.479 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:11.479 12:12:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 [2024-06-10 12:12:17.008744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:11.479 12:12:17 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.479 [2024-06-10 12:12:17.068038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:11.479 12:12:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:15.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.825 rmmod nvme_tcp 00:08:29.825 rmmod nvme_fabrics 00:08:29.825 rmmod nvme_keyring 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 475487 ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 475487 ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 475487' 00:08:29.825 killing process with pid 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 475487 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.825 12:12:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.374 12:12:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.374 00:08:32.374 real 0m29.598s 00:08:32.374 user 1m18.414s 00:08:32.374 sys 0m6.930s 00:08:32.374 12:12:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:32.374 12:12:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.374 ************************************ 00:08:32.374 END TEST nvmf_connect_disconnect 00:08:32.374 ************************************ 00:08:32.374 12:12:37 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:32.374 12:12:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:32.374 12:12:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:32.374 12:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.374 ************************************ 00:08:32.374 START TEST nvmf_multitarget 00:08:32.374 ************************************ 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:32.374 * Looking for test storage... 
00:08:32.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.374 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
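_remove_spdk_ns, invoked here with its xtrace output silenced, clears any network namespace left behind by the previous test so the cvl_0_0_ns_spdk setup below starts clean. Conceptually (a sketch, not the exact helper):
  # Sketch only: delete leftover SPDK test namespaces.
  for ns in $(ip netns list | awk '{print $1}'); do
      [[ $ns == *_ns_spdk ]] || continue
      ip netns delete "$ns"     # physical links fall back to the root namespace
  done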
00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.375 12:12:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:40.515 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:40.515 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:40.515 Found net devices under 0000:31:00.0: cvl_0_0 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
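The scan lands on the same two E810 ports as before; nvmf_tcp_init, traced below, then rebuilds the split that puts the target port in its own namespace so initiator and target traffic crosses the physical link. Condensed from the trace:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port on the host side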
00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:40.515 Found net devices under 0000:31:00.1: cvl_0_1 00:08:40.515 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:40.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.789 ms 00:08:40.516 00:08:40.516 --- 10.0.0.2 ping statistics --- 00:08:40.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.516 rtt min/avg/max/mdev = 0.789/0.789/0.789/0.000 ms 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:40.516 00:08:40.516 --- 10.0.0.1 ping statistics --- 00:08:40.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.516 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=483973 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 483973 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 483973 ']' 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:40.516 12:12:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:40.516 [2024-06-10 12:12:45.834189] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
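After target start-up, the multitarget test body is short: it checks that exactly one (default) target exists, adds two more, re-checks the count, deletes them, and checks again, which is what the jq-length comparisons below are doing. Roughly equivalent by hand (script path as in this run):
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]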
00:08:40.516 [2024-06-10 12:12:45.834242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.516 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.516 [2024-06-10 12:12:45.906968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.516 [2024-06-10 12:12:45.972144] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.516 [2024-06-10 12:12:45.972180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.516 [2024-06-10 12:12:45.972187] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.516 [2024-06-10 12:12:45.972198] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.516 [2024-06-10 12:12:45.972204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.516 [2024-06-10 12:12:45.972314] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.516 [2024-06-10 12:12:45.972428] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.516 [2024-06-10 12:12:45.972580] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.516 [2024-06-10 12:12:45.972581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:41.089 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:41.349 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:41.349 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:41.349 "nvmf_tgt_1" 00:08:41.349 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:41.349 "nvmf_tgt_2" 00:08:41.349 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:41.349 12:12:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:41.610 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:41.610 
12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:41.610 true 00:08:41.610 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:41.871 true 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.871 rmmod nvme_tcp 00:08:41.871 rmmod nvme_fabrics 00:08:41.871 rmmod nvme_keyring 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 483973 ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 483973 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 483973 ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 483973 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 483973 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 483973' 00:08:41.871 killing process with pid 483973 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 483973 00:08:41.871 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 483973 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.133 12:12:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.680 12:12:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.680 00:08:44.680 real 0m12.129s 00:08:44.680 user 0m9.515s 00:08:44.680 sys 0m6.388s 00:08:44.680 12:12:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:44.680 12:12:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:44.680 ************************************ 00:08:44.680 END TEST nvmf_multitarget 00:08:44.680 ************************************ 00:08:44.680 12:12:49 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:44.680 12:12:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:44.680 12:12:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:44.680 12:12:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.680 ************************************ 00:08:44.681 START TEST nvmf_rpc 00:08:44.681 ************************************ 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:44.681 * Looking for test storage... 00:08:44.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.681 12:12:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.681 
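
A note on the identity setup traced above: nvmf/common.sh mints the initiator identity once per run with `nvme gen-hostnqn` (nvme-cli), reuses the NQN's trailing UUID as the host ID, and packs both into the NVME_HOST array that every later `nvme connect` splices in. A minimal sketch of that pattern, assuming only nvme-cli is installed (the UUID extraction shown is one plausible way to derive NVME_HOSTID, not necessarily the harness's):

  # Mint one initiator identity and reuse it for every connect in the run.
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep only the trailing uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Used later as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem nqn> -a <addr> -s 4420
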
12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.681 12:12:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:08:52.827 Found 0000:31:00.0 (0x8086 - 0x159b)
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:08:52.827 Found 0000:31:00.1 (0x8086 - 0x159b)
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:08:52.827 Found net devices under 0000:31:00.0: cvl_0_0
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
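
The loop traced above is the harness resolving each allow-listed PCI function to its kernel net device: it globs /sys/bus/pci/devices/$pci/net/ and keeps only the basename. The same sysfs lookup, condensed into a standalone sketch (PCI addresses taken from the trace; a function with no bound net driver is simply skipped):

  # Map PCI addresses to net device names via sysfs.
  for pci in 0000:31:00.0 0000:31:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] || continue          # no net driver bound to this function
      echo "Found net devices under $pci: ${path##*/}"
    done
  done
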
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:08:52.827 Found net devices under 0000:31:00.1: cvl_0_1
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:52.827 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:52.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
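
Before the ping replies below, it is worth spelling out the topology nvmf_tcp_init just built: the target NIC (cvl_0_0, 10.0.0.2) is moved into its own network namespace cvl_0_0_ns_spdk, the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, and an iptables rule admits NVMe/TCP on port 4420. Condensed from the trace into a standalone sketch (run as root; interface names are the ones discovered above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability

The two ping exchanges that follow confirm reachability in both directions before the target application starts.
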
00:08:52.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:08:52.828 00:08:52.828 --- 10.0.0.2 ping statistics --- 00:08:52.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.828 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:52.828 00:08:52.828 --- 10.0.0.1 ping statistics --- 00:08:52.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.828 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.828 12:12:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=489026 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 489026 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 489026 ']' 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:52.828 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.828 [2024-06-10 12:12:58.071304] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:08:52.828 [2024-06-10 12:12:58.071370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.828 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.828 [2024-06-10 12:12:58.149490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.828 [2024-06-10 12:12:58.224313] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.828 [2024-06-10 12:12:58.224352] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.828 [2024-06-10 12:12:58.224360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.828 [2024-06-10 12:12:58.224366] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.828 [2024-06-10 12:12:58.224372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.828 [2024-06-10 12:12:58.224510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.828 [2024-06-10 12:12:58.224626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.828 [2024-06-10 12:12:58.224781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.828 [2024-06-10 12:12:58.224781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.400 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:53.400 "tick_rate": 2400000000, 00:08:53.400 "poll_groups": [ 00:08:53.400 { 00:08:53.400 "name": "nvmf_tgt_poll_group_000", 00:08:53.400 "admin_qpairs": 0, 00:08:53.400 "io_qpairs": 0, 00:08:53.401 "current_admin_qpairs": 0, 00:08:53.401 "current_io_qpairs": 0, 00:08:53.401 "pending_bdev_io": 0, 00:08:53.401 "completed_nvme_io": 0, 00:08:53.401 "transports": [] 00:08:53.401 }, 00:08:53.401 { 00:08:53.401 "name": "nvmf_tgt_poll_group_001", 00:08:53.401 "admin_qpairs": 0, 00:08:53.401 "io_qpairs": 0, 00:08:53.401 "current_admin_qpairs": 0, 00:08:53.401 "current_io_qpairs": 0, 00:08:53.401 "pending_bdev_io": 0, 00:08:53.401 "completed_nvme_io": 0, 00:08:53.401 "transports": [] 00:08:53.401 }, 00:08:53.401 { 00:08:53.401 "name": "nvmf_tgt_poll_group_002", 00:08:53.401 "admin_qpairs": 0, 00:08:53.401 "io_qpairs": 0, 00:08:53.401 "current_admin_qpairs": 0, 00:08:53.401 "current_io_qpairs": 0, 00:08:53.401 "pending_bdev_io": 0, 00:08:53.401 "completed_nvme_io": 0, 00:08:53.401 "transports": [] 
00:08:53.401 }, 00:08:53.401 { 00:08:53.401 "name": "nvmf_tgt_poll_group_003", 00:08:53.401 "admin_qpairs": 0, 00:08:53.401 "io_qpairs": 0, 00:08:53.401 "current_admin_qpairs": 0, 00:08:53.401 "current_io_qpairs": 0, 00:08:53.401 "pending_bdev_io": 0, 00:08:53.401 "completed_nvme_io": 0, 00:08:53.401 "transports": [] 00:08:53.401 } 00:08:53.401 ] 00:08:53.401 }' 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.401 12:12:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.401 [2024-06-10 12:12:59.001974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.661 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:53.661 "tick_rate": 2400000000, 00:08:53.661 "poll_groups": [ 00:08:53.661 { 00:08:53.661 "name": "nvmf_tgt_poll_group_000", 00:08:53.661 "admin_qpairs": 0, 00:08:53.661 "io_qpairs": 0, 00:08:53.661 "current_admin_qpairs": 0, 00:08:53.661 "current_io_qpairs": 0, 00:08:53.661 "pending_bdev_io": 0, 00:08:53.661 "completed_nvme_io": 0, 00:08:53.661 "transports": [ 00:08:53.661 { 00:08:53.661 "trtype": "TCP" 00:08:53.661 } 00:08:53.661 ] 00:08:53.661 }, 00:08:53.661 { 00:08:53.661 "name": "nvmf_tgt_poll_group_001", 00:08:53.661 "admin_qpairs": 0, 00:08:53.661 "io_qpairs": 0, 00:08:53.661 "current_admin_qpairs": 0, 00:08:53.661 "current_io_qpairs": 0, 00:08:53.661 "pending_bdev_io": 0, 00:08:53.661 "completed_nvme_io": 0, 00:08:53.661 "transports": [ 00:08:53.661 { 00:08:53.661 "trtype": "TCP" 00:08:53.661 } 00:08:53.661 ] 00:08:53.661 }, 00:08:53.661 { 00:08:53.661 "name": "nvmf_tgt_poll_group_002", 00:08:53.661 "admin_qpairs": 0, 00:08:53.661 "io_qpairs": 0, 00:08:53.661 "current_admin_qpairs": 0, 00:08:53.661 "current_io_qpairs": 0, 00:08:53.662 "pending_bdev_io": 0, 00:08:53.662 "completed_nvme_io": 0, 00:08:53.662 "transports": [ 00:08:53.662 { 00:08:53.662 "trtype": "TCP" 00:08:53.662 } 00:08:53.662 ] 00:08:53.662 }, 00:08:53.662 { 00:08:53.662 "name": "nvmf_tgt_poll_group_003", 00:08:53.662 "admin_qpairs": 0, 00:08:53.662 "io_qpairs": 0, 00:08:53.662 "current_admin_qpairs": 0, 00:08:53.662 "current_io_qpairs": 0, 00:08:53.662 "pending_bdev_io": 0, 00:08:53.662 "completed_nvme_io": 0, 00:08:53.662 "transports": [ 00:08:53.662 { 00:08:53.662 "trtype": "TCP" 00:08:53.662 } 00:08:53.662 ] 00:08:53.662 } 00:08:53.662 ] 
00:08:53.662 }' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 Malloc1 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 [2024-06-10 12:12:59.189598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:08:53.662 [2024-06-10 12:12:59.216489] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:08:53.662 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:53.662 could not add new controller: failed to write to nvme-fabrics device 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.662 12:12:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:55.635 12:13:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:55.635 12:13:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:55.635 12:13:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.635 12:13:00 
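
The exchange traced above is the suite's host-ACL check: with allow_any_host disabled, the first connect is rejected by the target ("does not allow host") and surfaces on the initiator as an I/O error on /dev/nvme-fabrics; once nvmf_subsystem_add_host whitelists the host NQN, the same connect succeeds. A condensed sketch of that sequence, assuming $rpc points at SPDK's scripts/rpc.py (the tool behind the harness's rpc_cmd wrapper):

  $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1     # enforce the allowed-host list
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    && echo "BUG: connect should have been rejected" >&2
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # admitted
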
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:55.635 12:13:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:57.546 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.547 [2024-06-10 12:13:02.901702] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:08:57.547 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:57.547 could not add new controller: failed to write to nvme-fabrics device 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:57.547 12:13:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.928 12:13:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.928 12:13:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:58.928 12:13:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.928 12:13:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:58.928 12:13:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:01.467 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 [2024-06-10 12:13:06.579023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.468 12:13:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.850 12:13:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.850 12:13:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:02.850 12:13:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.850 12:13:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:02.850 12:13:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.760 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 [2024-06-10 12:13:10.238737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.761 
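
Each of the five passes traced through target/rpc.sh@81-@94 exercises the full subsystem lifecycle end to end. One pass, reduced to its commands (a sketch, not the harness itself; $rpc is assumed to be scripts/rpc.py, and waitforserial is the harness helper whose xtrace appears around it):

  for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as NSID 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME                                   # wait for the block device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
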
12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.761 12:13:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.147 12:13:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.147 12:13:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:06.147 12:13:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.147 12:13:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:06.147 12:13:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.689 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.690 12:13:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.690 [2024-06-10 12:13:13.897211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.690 12:13:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.071 12:13:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.071 12:13:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:10.071 12:13:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.071 12:13:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:10.071 12:13:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 [2024-06-10 12:13:17.558330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.980 12:13:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.891 12:13:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.891 12:13:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:13.891 12:13:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
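
The lsblk/grep polling that keeps reappearing in the trace is the harness's waitforserial helper: it loops until the number of block devices whose SERIAL column matches the subsystem's serial reaches the expected count. A simplified reconstruction of what the xtrace shows (the real common/autotest_common.sh version differs in detail):

  # Wait until a block device with the given NVMe serial shows up.
  waitforserial() {
    local serial=$1 want=${2:-1} i=0 got
    sleep 2                                   # give the controller time to attach
    while (( i++ <= 15 )); do
      got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
      (( got == want )) && return 0
      sleep 2
    done
    return 1
  }
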
00:09:13.891 12:13:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:13.891 12:13:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 [2024-06-10 12:13:21.231724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.838 12:13:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.219 12:13:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.219 12:13:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:17.219 12:13:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.219 12:13:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:17.220 12:13:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:19.133 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.393 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.393 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:19.393 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 [2024-06-10 12:13:24.905909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 [2024-06-10 12:13:24.966019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.394 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.655 12:13:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.655 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.655 12:13:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.655 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.655 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.655 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.655 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 [2024-06-10 12:13:25.030218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 [2024-06-10 12:13:25.086379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 [2024-06-10 12:13:25.146568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:19.656 "tick_rate": 2400000000, 00:09:19.656 "poll_groups": [ 00:09:19.656 { 00:09:19.656 "name": "nvmf_tgt_poll_group_000", 00:09:19.656 "admin_qpairs": 0, 00:09:19.656 
"io_qpairs": 224, 00:09:19.656 "current_admin_qpairs": 0, 00:09:19.656 "current_io_qpairs": 0, 00:09:19.656 "pending_bdev_io": 0, 00:09:19.656 "completed_nvme_io": 518, 00:09:19.656 "transports": [ 00:09:19.656 { 00:09:19.656 "trtype": "TCP" 00:09:19.656 } 00:09:19.656 ] 00:09:19.656 }, 00:09:19.656 { 00:09:19.656 "name": "nvmf_tgt_poll_group_001", 00:09:19.656 "admin_qpairs": 1, 00:09:19.656 "io_qpairs": 223, 00:09:19.656 "current_admin_qpairs": 0, 00:09:19.656 "current_io_qpairs": 0, 00:09:19.656 "pending_bdev_io": 0, 00:09:19.656 "completed_nvme_io": 224, 00:09:19.656 "transports": [ 00:09:19.656 { 00:09:19.656 "trtype": "TCP" 00:09:19.656 } 00:09:19.656 ] 00:09:19.656 }, 00:09:19.656 { 00:09:19.656 "name": "nvmf_tgt_poll_group_002", 00:09:19.656 "admin_qpairs": 6, 00:09:19.656 "io_qpairs": 218, 00:09:19.656 "current_admin_qpairs": 0, 00:09:19.656 "current_io_qpairs": 0, 00:09:19.656 "pending_bdev_io": 0, 00:09:19.656 "completed_nvme_io": 219, 00:09:19.656 "transports": [ 00:09:19.656 { 00:09:19.656 "trtype": "TCP" 00:09:19.656 } 00:09:19.656 ] 00:09:19.656 }, 00:09:19.656 { 00:09:19.656 "name": "nvmf_tgt_poll_group_003", 00:09:19.656 "admin_qpairs": 0, 00:09:19.656 "io_qpairs": 224, 00:09:19.656 "current_admin_qpairs": 0, 00:09:19.656 "current_io_qpairs": 0, 00:09:19.656 "pending_bdev_io": 0, 00:09:19.656 "completed_nvme_io": 278, 00:09:19.656 "transports": [ 00:09:19.656 { 00:09:19.656 "trtype": "TCP" 00:09:19.656 } 00:09:19.656 ] 00:09:19.656 } 00:09:19.656 ] 00:09:19.656 }' 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:19.656 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:19.657 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.918 rmmod nvme_tcp 00:09:19.918 rmmod nvme_fabrics 00:09:19.918 rmmod nvme_keyring 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:19.918 12:13:25 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 489026 ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 489026 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 489026 ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 489026 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 489026 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 489026' 00:09:19.918 killing process with pid 489026 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 489026 00:09:19.918 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 489026 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.180 12:13:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.098 12:13:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.098 00:09:22.098 real 0m37.877s 00:09:22.098 user 1m51.551s 00:09:22.098 sys 0m7.582s 00:09:22.098 12:13:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.098 12:13:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.098 ************************************ 00:09:22.098 END TEST nvmf_rpc 00:09:22.098 ************************************ 00:09:22.098 12:13:27 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:22.098 12:13:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:22.098 12:13:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.098 12:13:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.360 ************************************ 00:09:22.360 START TEST nvmf_invalid 00:09:22.360 ************************************ 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:22.360 * Looking for test storage... 
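The nvmf_rpc suite that just finished drives the whole subsystem lifecycle through rpc.py. Condensed from the trace above (a sketch, not the harness itself; it assumes a running nvmf_tgt with an existing Malloc1 bdev and rpc.py on PATH), each loop iteration amounts to:

    # Target side: create, expose, populate, open up (NQN/serial as in the log)
    NQN=nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host "$NQN"
    # Host side: connect, wait for the serial to show up in lsblk, disconnect
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    nvme disconnect -n "$NQN"
    # Teardown in reverse order
    rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    rpc.py nvmf_delete_subsystem "$NQN"

The stats check at the end is the same idea as the harness's jsum helper: sum one field across all poll groups, e.g. rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}', and assert the total is greater than zero.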
00:09:22.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.360 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.361 12:13:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:30.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:30.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:30.505 Found net devices under 0000:31:00.0: cvl_0_0 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.505 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:30.506 Found net devices under 0000:31:00.1: cvl_0_1 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.506 12:13:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
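The namespace plumbing in the trace above is what lets a single two-port E810 card act as both target and initiator on one machine: one port (cvl_0_0) moves into a private network namespace for the target, while the other (cvl_0_1) stays in the root namespace for the host. Stripped of the harness wrappers, the setup reduces to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow (root namespace to 10.0.0.2, then from inside the namespace back to 10.0.0.1) confirm the path works in both directions before any NVMe traffic is attempted.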
00:09:30.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:09:30.506 00:09:30.506 --- 10.0.0.2 ping statistics --- 00:09:30.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.506 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:09:30.506 00:09:30.506 --- 10.0.0.1 ping statistics --- 00:09:30.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.506 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=499241 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 499241 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 499241 ']' 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:30.506 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:30.767 [2024-06-10 12:13:36.119024] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
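With the topology verified, the target binary itself runs inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown in the nvmfappstart trace above), and waitforlisten blocks until the RPC socket answers. A hand-rolled equivalent of that readiness check (a hypothetical standalone form; the harness version is more thorough about retries and log capture):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # Poll the default RPC socket until the app responds or the process dies
    until rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done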
00:09:30.767 [2024-06-10 12:13:36.119070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.767 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.767 [2024-06-10 12:13:36.191674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.767 [2024-06-10 12:13:36.256872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.767 [2024-06-10 12:13:36.256909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.767 [2024-06-10 12:13:36.256917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.767 [2024-06-10 12:13:36.256927] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.767 [2024-06-10 12:13:36.256932] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.767 [2024-06-10 12:13:36.257070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.767 [2024-06-10 12:13:36.257207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.767 [2024-06-10 12:13:36.257315] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.767 [2024-06-10 12:13:36.257316] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.336 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:31.336 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:31.337 12:13:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2748 00:09:31.597 [2024-06-10 12:13:37.071140] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:31.597 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:31.597 { 00:09:31.597 "nqn": "nqn.2016-06.io.spdk:cnode2748", 00:09:31.597 "tgt_name": "foobar", 00:09:31.597 "method": "nvmf_create_subsystem", 00:09:31.597 "req_id": 1 00:09:31.597 } 00:09:31.597 Got JSON-RPC error response 00:09:31.597 response: 00:09:31.597 { 00:09:31.597 "code": -32603, 00:09:31.597 "message": "Unable to find target foobar" 00:09:31.597 }' 00:09:31.597 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:31.597 { 00:09:31.597 "nqn": "nqn.2016-06.io.spdk:cnode2748", 00:09:31.597 "tgt_name": "foobar", 00:09:31.598 "method": "nvmf_create_subsystem", 00:09:31.598 "req_id": 1 00:09:31.598 } 00:09:31.598 Got JSON-RPC error response 00:09:31.598 response: 00:09:31.598 { 00:09:31.598 "code": -32603, 00:09:31.598 "message": "Unable to find target foobar" 00:09:31.598 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:31.598 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:31.598 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18028 00:09:31.859 [2024-06-10 12:13:37.247727] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18028: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:31.859 { 00:09:31.859 "nqn": "nqn.2016-06.io.spdk:cnode18028", 00:09:31.859 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.859 "method": "nvmf_create_subsystem", 00:09:31.859 "req_id": 1 00:09:31.859 } 00:09:31.859 Got JSON-RPC error response 00:09:31.859 response: 00:09:31.859 { 00:09:31.859 "code": -32602, 00:09:31.859 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.859 }' 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:31.859 { 00:09:31.859 "nqn": "nqn.2016-06.io.spdk:cnode18028", 00:09:31.859 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.859 "method": "nvmf_create_subsystem", 00:09:31.859 "req_id": 1 00:09:31.859 } 00:09:31.859 Got JSON-RPC error response 00:09:31.859 response: 00:09:31.859 { 00:09:31.859 "code": -32602, 00:09:31.859 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.859 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32150 00:09:31.859 [2024-06-10 12:13:37.424326] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32150: invalid model number 'SPDK_Controller' 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:31.859 { 00:09:31.859 "nqn": "nqn.2016-06.io.spdk:cnode32150", 00:09:31.859 "model_number": "SPDK_Controller\u001f", 00:09:31.859 "method": "nvmf_create_subsystem", 00:09:31.859 "req_id": 1 00:09:31.859 } 00:09:31.859 Got JSON-RPC error response 00:09:31.859 response: 00:09:31.859 { 00:09:31.859 "code": -32602, 00:09:31.859 "message": "Invalid MN SPDK_Controller\u001f" 00:09:31.859 }' 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:31.859 { 00:09:31.859 "nqn": "nqn.2016-06.io.spdk:cnode32150", 00:09:31.859 "model_number": "SPDK_Controller\u001f", 00:09:31.859 "method": "nvmf_create_subsystem", 00:09:31.859 "req_id": 1 00:09:31.859 } 00:09:31.859 Got JSON-RPC error response 00:09:31.859 response: 00:09:31.859 { 00:09:31.859 "code": -32602, 00:09:31.859 "message": "Invalid MN SPDK_Controller\u001f" 00:09:31.859 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:31.859 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) [trace condensed: 21 near-identical iterations of target/invalid.sh@24-25 follow, each picking a code point, converting it with printf %x / echo -e, and appending it via string+=, assembling the serial number ',h LgzJW2u4No=N>+j Nq' one character at a time] 00:09:32.122 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:09:32.122 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ',h LgzJW2u4No=N>+j Nq' 00:09:32.122 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ',h LgzJW2u4No=N>+j Nq' nqn.2016-06.io.spdk:cnode22416 00:09:32.384 [2024-06-10 12:13:37.757367] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22416: invalid serial number ',h LgzJW2u4No=N>+j Nq' 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:32.384 { 00:09:32.384 "nqn": "nqn.2016-06.io.spdk:cnode22416", 00:09:32.384 "serial_number": ",h LgzJW2u4No=N>+j Nq", 00:09:32.384 "method": "nvmf_create_subsystem", 00:09:32.384 "req_id": 1 00:09:32.384 } 00:09:32.384 Got JSON-RPC error response 00:09:32.384 response: 00:09:32.384 { 00:09:32.384 "code": -32602, 00:09:32.384 "message": "Invalid SN ,h LgzJW2u4No=N>+j Nq" 00:09:32.384 }' 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:32.384 { 00:09:32.384 "nqn": "nqn.2016-06.io.spdk:cnode22416", 00:09:32.384 "serial_number": ",h LgzJW2u4No=N>+j Nq", 00:09:32.384 "method": "nvmf_create_subsystem", 00:09:32.384 "req_id": 1 00:09:32.384 } 00:09:32.384 Got JSON-RPC error response 00:09:32.384 response: 00:09:32.384 { 00:09:32.384 "code": -32602, 00:09:32.384 "message": "Invalid SN ,h LgzJW2u4No=N>+j Nq" 00:09:32.384 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:32.384 12:13:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) [trace condensed: 41 near-identical iterations of target/invalid.sh@24-25 follow, assembling the model number one character at a time via printf %x / echo -e / string+=] 00:09:32.646 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:09:32.646 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wR`1*sSxEt1hvn7MVQ7I<4/h&.wGST_jE;KmI7*\' 00:09:32.646 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'wR`1*sSxEt1hvn7MVQ7I<4/h&.wGST_jE;KmI7*\' nqn.2016-06.io.spdk:cnode32401 00:09:32.906 [2024-06-10 12:13:38.238907] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32401: invalid model number 'wR`1*sSxEt1hvn7MVQ7I<4/h&.wGST_jE;KmI7*\' 00:09:32.906 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:32.906 { 00:09:32.906 "nqn": "nqn.2016-06.io.spdk:cnode32401", 00:09:32.906 "model_number": "wR`1*sSxEt1hvn7M\u007fVQ7I<4/h&.wGST_jE;KmI7*\\", 00:09:32.906 "method": "nvmf_create_subsystem", 00:09:32.906 "req_id": 1 00:09:32.906 } 00:09:32.906 Got JSON-RPC error response 00:09:32.906 response: 00:09:32.906 { 00:09:32.906 "code": -32602, 00:09:32.906 "message": "Invalid MN wR`1*sSxEt1hvn7M\u007fVQ7I<4/h&.wGST_jE;KmI7*\\" 00:09:32.906 }' 00:09:32.906 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:32.906 { 00:09:32.906 "nqn": "nqn.2016-06.io.spdk:cnode32401", 00:09:32.906 "model_number": "wR`1*sSxEt1hvn7M\u007fVQ7I<4/h&.wGST_jE;KmI7*\\", 00:09:32.906 "method": "nvmf_create_subsystem", 00:09:32.906 "req_id": 1 00:09:32.906 } 00:09:32.906 Got JSON-RPC error response 00:09:32.906 response: 00:09:32.906 { 00:09:32.906 "code": -32602, 00:09:32.906 "message": "Invalid MN wR`1*sSxEt1hvn7M\u007fVQ7I<4/h&.wGST_jE;KmI7*\\" 00:09:32.906 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:32.906 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:32.906 [2024-06-10 12:13:38.411525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.906 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:33.166 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:33.166 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:33.166 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:33.166 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:33.166 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:33.426 [2024-06-10 12:13:38.762091] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:33.426 { 00:09:33.426 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.426 "listen_address": { 00:09:33.426 "trtype": "tcp", 00:09:33.426 "traddr": "", 00:09:33.426 "trsvcid": "4421" 00:09:33.426 }, 00:09:33.426 "method": "nvmf_subsystem_remove_listener", 00:09:33.426 "req_id": 1 00:09:33.426 } 00:09:33.426 Got JSON-RPC error response 00:09:33.426 response: 00:09:33.426 { 00:09:33.426 "code": -32602, 00:09:33.426 "message": "Invalid parameters" 00:09:33.426 }' 00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:33.426 { 00:09:33.426 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.426 "listen_address": { 00:09:33.426 "trtype": "tcp", 00:09:33.426 "traddr": "", 00:09:33.426 "trsvcid": "4421" 00:09:33.426 }, 00:09:33.426 "method": "nvmf_subsystem_remove_listener", 00:09:33.426 "req_id": 1 00:09:33.426 } 00:09:33.426 Got JSON-RPC error response 00:09:33.426 response: 00:09:33.426 { 00:09:33.426 "code": -32602, 00:09:33.426 "message": "Invalid parameters" 00:09:33.426 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11710 -i 0 00:09:33.426 [2024-06-10 12:13:38.934617] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11710: invalid cntlid range [0-65519] 00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:33.426 { 00:09:33.426 "nqn": "nqn.2016-06.io.spdk:cnode11710", 00:09:33.426 "min_cntlid": 0, 00:09:33.426 "method": "nvmf_create_subsystem", 00:09:33.426 "req_id": 1 00:09:33.426 } 00:09:33.426 Got JSON-RPC error response 00:09:33.426 response: 00:09:33.426 { 00:09:33.426 "code": -32602, 00:09:33.426 "message": "Invalid cntlid range [0-65519]" 00:09:33.426 }' 00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:33.426 { 00:09:33.426 "nqn": "nqn.2016-06.io.spdk:cnode11710", 00:09:33.426 "min_cntlid": 0, 00:09:33.426 "method": "nvmf_create_subsystem", 00:09:33.426 "req_id": 1 00:09:33.426 } 00:09:33.426 Got JSON-RPC error response 00:09:33.426 response: 00:09:33.426 { 00:09:33.426 "code": -32602, 00:09:33.426 "message": "Invalid cntlid range [0-65519]" 00:09:33.426 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.426 12:13:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17874 -i 65520 00:09:33.686 [2024-06-10 12:13:39.107181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17874: invalid cntlid range [65520-65519] 00:09:33.686 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:33.686 { 00:09:33.686 "nqn": "nqn.2016-06.io.spdk:cnode17874", 00:09:33.686 "min_cntlid": 65520, 00:09:33.686 "method": "nvmf_create_subsystem", 00:09:33.686 "req_id": 1 00:09:33.686 } 00:09:33.686 Got JSON-RPC error response 00:09:33.686 response: 00:09:33.686 { 00:09:33.686 "code": -32602, 00:09:33.686 "message": "Invalid cntlid range [65520-65519]" 00:09:33.686 }' 00:09:33.686 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:33.686 { 00:09:33.686 "nqn": "nqn.2016-06.io.spdk:cnode17874", 00:09:33.686 "min_cntlid": 65520, 00:09:33.686 "method": "nvmf_create_subsystem", 00:09:33.686 "req_id": 1 00:09:33.686 } 00:09:33.686 Got JSON-RPC error response 00:09:33.686 response: 00:09:33.686 { 00:09:33.686 "code": -32602, 00:09:33.686 "message": "Invalid cntlid range [65520-65519]" 00:09:33.687 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.687 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10116 -I 0 00:09:33.687 [2024-06-10 12:13:39.279735] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10116: invalid cntlid range [1-0] 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:33.947 { 00:09:33.947 "nqn": "nqn.2016-06.io.spdk:cnode10116", 00:09:33.947 "max_cntlid": 0, 00:09:33.947 "method": "nvmf_create_subsystem", 00:09:33.947 "req_id": 1 00:09:33.947 } 00:09:33.947 Got JSON-RPC error response 00:09:33.947 response: 00:09:33.947 { 00:09:33.947 "code": -32602, 00:09:33.947 "message": "Invalid cntlid range [1-0]" 00:09:33.947 }' 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:33.947 { 00:09:33.947 "nqn": "nqn.2016-06.io.spdk:cnode10116", 00:09:33.947 "max_cntlid": 0, 00:09:33.947 "method": "nvmf_create_subsystem", 00:09:33.947 "req_id": 1 00:09:33.947 } 00:09:33.947 Got JSON-RPC error response 00:09:33.947 response: 00:09:33.947 { 00:09:33.947 "code": -32602, 00:09:33.947 "message": "Invalid 
cntlid range [1-0]" 00:09:33.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17510 -I 65520 00:09:33.947 [2024-06-10 12:13:39.444266] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17510: invalid cntlid range [1-65520] 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:33.947 { 00:09:33.947 "nqn": "nqn.2016-06.io.spdk:cnode17510", 00:09:33.947 "max_cntlid": 65520, 00:09:33.947 "method": "nvmf_create_subsystem", 00:09:33.947 "req_id": 1 00:09:33.947 } 00:09:33.947 Got JSON-RPC error response 00:09:33.947 response: 00:09:33.947 { 00:09:33.947 "code": -32602, 00:09:33.947 "message": "Invalid cntlid range [1-65520]" 00:09:33.947 }' 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:33.947 { 00:09:33.947 "nqn": "nqn.2016-06.io.spdk:cnode17510", 00:09:33.947 "max_cntlid": 65520, 00:09:33.947 "method": "nvmf_create_subsystem", 00:09:33.947 "req_id": 1 00:09:33.947 } 00:09:33.947 Got JSON-RPC error response 00:09:33.947 response: 00:09:33.947 { 00:09:33.947 "code": -32602, 00:09:33.947 "message": "Invalid cntlid range [1-65520]" 00:09:33.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.947 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10358 -i 6 -I 5 00:09:34.207 [2024-06-10 12:13:39.608797] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10358: invalid cntlid range [6-5] 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:34.207 { 00:09:34.207 "nqn": "nqn.2016-06.io.spdk:cnode10358", 00:09:34.207 "min_cntlid": 6, 00:09:34.207 "max_cntlid": 5, 00:09:34.207 "method": "nvmf_create_subsystem", 00:09:34.207 "req_id": 1 00:09:34.207 } 00:09:34.207 Got JSON-RPC error response 00:09:34.207 response: 00:09:34.207 { 00:09:34.207 "code": -32602, 00:09:34.207 "message": "Invalid cntlid range [6-5]" 00:09:34.207 }' 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:34.207 { 00:09:34.207 "nqn": "nqn.2016-06.io.spdk:cnode10358", 00:09:34.207 "min_cntlid": 6, 00:09:34.207 "max_cntlid": 5, 00:09:34.207 "method": "nvmf_create_subsystem", 00:09:34.207 "req_id": 1 00:09:34.207 } 00:09:34.207 Got JSON-RPC error response 00:09:34.207 response: 00:09:34.207 { 00:09:34.207 "code": -32602, 00:09:34.207 "message": "Invalid cntlid range [6-5]" 00:09:34.207 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:34.207 { 00:09:34.207 "name": "foobar", 00:09:34.207 "method": "nvmf_delete_target", 00:09:34.207 "req_id": 1 00:09:34.207 } 00:09:34.207 Got JSON-RPC error response 00:09:34.207 response: 00:09:34.207 { 00:09:34.207 "code": -32602, 00:09:34.207 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:34.207 }' 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:34.207 { 00:09:34.207 "name": "foobar", 00:09:34.207 "method": "nvmf_delete_target", 00:09:34.207 "req_id": 1 00:09:34.207 } 00:09:34.207 Got JSON-RPC error response 00:09:34.207 response: 00:09:34.207 { 00:09:34.207 "code": -32602, 00:09:34.207 "message": "The specified target doesn't exist, cannot delete it." 00:09:34.207 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.207 rmmod nvme_tcp 00:09:34.207 rmmod nvme_fabrics 00:09:34.207 rmmod nvme_keyring 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 499241 ']' 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 499241 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 499241 ']' 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 499241 00:09:34.207 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:09:34.468 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 499241 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 499241' 00:09:34.469 killing process with pid 499241 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 499241 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 499241 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.469 12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.469 
12:13:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.052 12:13:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.052 00:09:37.052 real 0m14.359s 00:09:37.052 user 0m19.436s 00:09:37.052 sys 0m6.934s 00:09:37.052 12:13:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:37.052 12:13:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:37.052 ************************************ 00:09:37.052 END TEST nvmf_invalid 00:09:37.052 ************************************ 00:09:37.052 12:13:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.052 12:13:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:37.052 12:13:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:37.052 12:13:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.052 ************************************ 00:09:37.052 START TEST nvmf_abort 00:09:37.052 ************************************ 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.052 * Looking for test storage... 00:09:37.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.052 12:13:42 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
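
Earlier in this prologue, nvmf/common.sh derives the initiator identity from the nvme CLI before any target setup happens. A small sketch of those steps in isolation (the ${NVME_HOSTNQN##*:} extraction is our shorthand for how the UUID ends up in NVME_HOSTID; the suite assigns it its own way):

    # Generate a host NQN once and pass it to every `nvme connect` the tests issue.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep just the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
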
00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.052 12:13:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.193 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.193 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.193 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.193 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.201 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:45.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:45.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:45.202 Found net devices under 0000:31:00.0: cvl_0_0 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:45.202 Found net devices under 0000:31:00.1: cvl_0_1 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.202 12:13:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:09:45.202 00:09:45.202 --- 10.0.0.2 ping statistics --- 00:09:45.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.202 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:09:45.202 00:09:45.202 --- 10.0.0.1 ping statistics --- 00:09:45.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.202 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=504830 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 504830 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 504830 ']' 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:45.202 12:13:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.202 [2024-06-10 12:13:50.340305] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
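
The ping exchange above closes out nvmf_tcp_init: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. The traced commands amount to roughly this sequence (root required; interface names are the ones discovered above, and the address-flush steps are elided):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With that in place, nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk prefix on NVMF_APP), which is what the EAL startup lines that follow show.
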
00:09:45.203 [2024-06-10 12:13:50.340367] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.203 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.203 [2024-06-10 12:13:50.425474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.203 [2024-06-10 12:13:50.520920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.203 [2024-06-10 12:13:50.520982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.203 [2024-06-10 12:13:50.520991] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.203 [2024-06-10 12:13:50.520997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.203 [2024-06-10 12:13:50.521003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.203 [2024-06-10 12:13:50.521142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.203 [2024-06-10 12:13:50.523222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.203 [2024-06-10 12:13:50.523251] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.775 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:45.775 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:09:45.775 12:13:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.775 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:45.775 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 [2024-06-10 12:13:51.169303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 Malloc0 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 Delay0 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.776 12:13:51 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 [2024-06-10 12:13:51.246542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:45.776 12:13:51 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:45.776 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.776 [2024-06-10 12:13:51.355682] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:48.332 Initializing NVMe Controllers 00:09:48.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:48.332 controller IO queue size 128 less than required 00:09:48.332 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:48.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:48.332 Initialization complete. Launching workers. 
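
Stripped of the xtrace noise, the abort test provisions its target with a short RPC sequence and then aims the abort example at the resulting subsystem. Spelled as plain scripts/rpc.py calls (which is roughly what the rpc_cmd wrapper amounts to against /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then drive it for one second at queue depth 128:
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The artificially slow Delay0 bdev layered on Malloc0 exists precisely so that I/Os stay in flight long enough to be worth aborting, as the abort counts that follow show.
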
00:09:48.332 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34564 00:09:48.332 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34629, failed to submit 62 00:09:48.332 success 34568, unsuccess 61, failed 0 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.332 rmmod nvme_tcp 00:09:48.332 rmmod nvme_fabrics 00:09:48.332 rmmod nvme_keyring 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:48.332 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 504830 ']' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 504830 ']' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 504830' 00:09:48.333 killing process with pid 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 504830 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.333 12:13:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.875 12:13:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.875 00:09:50.875 real 0m13.779s 00:09:50.875 user 0m14.115s 00:09:50.875 sys 0m6.773s 00:09:50.875 12:13:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:50.875 12:13:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.875 ************************************ 00:09:50.875 END TEST nvmf_abort 00:09:50.875 ************************************ 00:09:50.875 12:13:55 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:50.875 12:13:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:50.875 12:13:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:50.875 12:13:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.875 ************************************ 00:09:50.875 START TEST nvmf_ns_hotplug_stress 00:09:50.875 ************************************ 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:50.875 * Looking for test storage... 00:09:50.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.875 12:13:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.875 12:13:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.875 12:13:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.007 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:59.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:59.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.008 12:14:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:59.008 Found net devices under 0000:31:00.0: cvl_0_0 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:59.008 Found net devices under 0000:31:00.1: cvl_0_1 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
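
The device walk above is sysfs spelunking: common.sh matches the supported Intel/Mellanox PCI IDs (here two e810 functions, vendor:device 0x8086:0x159b) and then resolves each PCI function to its kernel netdev. The resolution step is essentially:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each bound PCI function lists its netdevs under sysfs
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done

which is where the cvl_0_0 and cvl_0_1 names in the log come from.
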
00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.008 12:14:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:09:59.008 00:09:59.008 --- 10.0.0.2 ping statistics --- 00:09:59.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.008 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:09:59.008 00:09:59.008 --- 10.0.0.1 ping statistics --- 00:09:59.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.008 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=510279 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 510279 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 510279 ']' 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:59.008 12:14:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.008 [2024-06-10 12:14:04.342633] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:59.008 [2024-06-10 12:14:04.342693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.008 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.008 [2024-06-10 12:14:04.439008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.008 [2024-06-10 12:14:04.531583] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
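
nvmfappstart then launches the target inside the namespace prepared above and blocks until its RPC socket answers; everything that follows is driven over /var/tmp/spdk.sock. In outline (a sketch of what common.sh does here, not a verbatim excerpt):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs

The -m 0xE core mask selects cores 1-3, which is why exactly three reactors report in below.
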
00:09:59.008 [2024-06-10 12:14:04.531636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.008 [2024-06-10 12:14:04.531644] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.008 [2024-06-10 12:14:04.531651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.008 [2024-06-10 12:14:04.531658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.009 [2024-06-10 12:14:04.531997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.009 [2024-06-10 12:14:04.532130] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.009 [2024-06-10 12:14:04.532131] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.577 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:59.577 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:59.578 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.837 [2024-06-10 12:14:05.300398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.837 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.098 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.098 [2024-06-10 12:14:05.633846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.098 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.358 12:14:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:00.618 Malloc0 00:10:00.618 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.618 Delay0 00:10:00.618 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.879 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:00.879 NULL1 00:10:01.138 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:01.138 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:01.138 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=510833 00:10:01.138 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:01.138 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.398 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.398 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:01.398 12:14:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:01.660 [2024-06-10 12:14:07.111066] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:01.660 true 00:10:01.660 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:01.660 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.920 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.920 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:01.920 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:02.179 true 00:10:02.179 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:02.179 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.439 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.439 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:02.439 12:14:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:02.700 true 00:10:02.700 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 510833 00:10:02.700 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.700 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.960 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:02.960 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:03.220 true 00:10:03.220 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:03.220 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.220 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.481 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:03.481 12:14:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:03.757 true 00:10:03.757 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:03.757 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.757 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.065 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:04.065 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:04.065 true 00:10:04.065 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:04.065 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.325 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.585 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:04.585 12:14:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:04.585 true 00:10:04.585 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:04.585 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.844 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.104 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:05.104 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:05.104 true 00:10:05.104 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:05.104 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.364 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.624 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:05.624 12:14:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:05.624 true 00:10:05.624 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:05.624 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.885 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.885 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:05.885 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:06.145 true 00:10:06.145 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:06.145 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.404 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.404 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:06.404 12:14:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:06.666 true 00:10:06.666 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:06.666 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.928 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.928 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:06.928 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:07.188 true 00:10:07.188 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:07.188 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.448 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.448 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:07.448 12:14:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:07.708 true 00:10:07.708 12:14:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:07.708 12:14:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.649 Read completed with error (sct=0, sc=11) 00:10:08.649 12:14:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.649 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:08.649 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:08.910 true 00:10:08.910 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:08.910 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.910 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.170 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:09.170 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:09.431 true 00:10:09.431 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:09.431 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.431 12:14:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:10:09.690 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:09.690 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:09.690 true 00:10:09.690 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:09.690 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.951 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.211 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:10.211 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:10.211 true 00:10:10.211 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:10.211 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.471 12:14:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.752 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:10.752 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:10.752 true 00:10:10.752 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:10.752 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.013 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.013 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:11.013 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:11.273 true 00:10:11.273 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:11.273 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.533 12:14:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.533 12:14:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:11.533 12:14:17 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:11.794 true 00:10:11.794 12:14:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:11.794 12:14:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.734 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.734 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:12.734 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:12.994 true 00:10:12.994 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:12.994 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.254 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.254 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:13.254 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:13.515 true 00:10:13.515 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:13.515 12:14:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.515 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.776 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:13.776 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:14.037 true 00:10:14.037 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:14.037 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.037 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.297 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:14.297 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:14.297 true 00:10:14.558 
12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:14.558 12:14:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.558 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.819 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:14.819 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:15.080 true 00:10:15.080 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:15.080 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.080 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.340 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:15.340 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:15.340 true 00:10:15.601 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:15.601 12:14:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.601 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.861 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:15.861 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:15.861 true 00:10:15.861 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:15.861 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.120 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.381 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:16.381 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:16.381 true 00:10:16.381 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:16.381 12:14:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.650 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.915 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:16.915 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:16.915 true 00:10:16.915 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:16.916 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.176 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.176 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:17.176 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:17.445 true 00:10:17.445 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:17.445 12:14:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.760 12:14:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.760 12:14:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:17.760 12:14:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:18.022 true 00:10:18.022 12:14:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:18.022 12:14:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.966 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.966 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:18.966 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:19.227 true 00:10:19.227 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:19.227 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.227 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.488 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:19.488 12:14:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:19.488 true 00:10:19.749 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:19.749 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.749 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.010 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:20.010 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:20.010 true 00:10:20.010 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:20.010 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.271 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.531 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:20.531 12:14:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:20.531 true 00:10:20.531 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:20.531 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.792 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.062 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:21.062 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:21.062 true 00:10:21.062 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:21.062 12:14:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.003 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:10:22.003 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.263 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:22.263 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:22.263 true 00:10:22.263 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:22.263 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.524 12:14:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.524 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:22.524 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:22.784 true 00:10:22.784 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:22.784 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.044 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.044 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:23.044 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:23.304 true 00:10:23.304 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:23.304 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.563 12:14:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.563 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:23.563 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:23.824 true 00:10:23.824 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:23.824 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.098 12:14:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.098 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:24.098 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:24.358 true 00:10:24.358 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:24.358 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.619 12:14:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.619 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:24.619 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:24.880 true 00:10:24.880 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:24.880 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.140 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.140 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:25.140 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:25.401 true 00:10:25.401 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:25.401 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.401 12:14:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.662 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:25.662 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:25.922 true 00:10:25.922 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:25.922 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.922 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.182 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:26.182 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:26.182 true 00:10:26.443 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:26.443 12:14:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.386 12:14:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.386 12:14:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:27.386 12:14:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:27.386 true 00:10:27.647 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:27.647 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.647 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.908 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:27.908 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:27.908 true 00:10:27.908 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:27.908 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.168 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.427 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:28.427 12:14:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:28.427 true 00:10:28.427 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:28.427 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.687 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.946 12:14:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:28.946 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:28.946 true 00:10:28.946 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:28.946 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.206 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.465 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:29.465 12:14:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:29.465 true 00:10:29.465 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:29.465 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.726 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.985 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:29.985 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:29.985 true 00:10:29.985 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:29.986 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.245 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.245 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:30.245 12:14:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:30.506 true 00:10:30.506 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833 00:10:30.506 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.767 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.767 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:30.767 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:10:31.027 true
00:10:31.027 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833
00:10:31.027 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:31.286 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:31.286 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:10:31.286 12:14:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:10:31.546 true
00:10:31.546 12:14:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833
00:10:31.546 12:14:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:32.543 Initializing NVMe Controllers
00:10:32.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:32.543 Controller IO queue size 128, less than required.
00:10:32.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:32.543 Controller IO queue size 128, less than required.
00:10:32.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:32.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:32.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:32.543 Initialization complete. Launching workers.
00:10:32.543 ========================================================
00:10:32.543                                                                              Latency(us)
00:10:32.543 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:32.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     184.93       0.09  149271.65    2513.91 1053139.55
00:10:32.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    5880.13       2.87   21696.36    1649.56  498924.13
00:10:32.544 ========================================================
00:10:32.544 Total                                                                    :    6065.06       2.96   25586.33    1649.56 1053139.55
00:10:32.544
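[Note: the latency summary above is the I/O generator's verdict on the hot-plug churn. NSID 1, the Delay0-backed namespace that was being hot-removed and re-added throughout (and whose delay bdev injects latency by design), completed only 184.93 IOPS at an average of roughly 149 ms per I/O with a worst case above one second, while NSID 2 (presumably the NULL1 bdev, which was only being resized) sustained 5880.13 IOPS at roughly 22 ms. The Total row is the IOPS-weighted mean of the per-namespace averages: (184.93 * 149271.65 + 5880.13 * 21696.36) / 6065.06 ≈ 25586 us, matching the printed 25586.33. The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines earlier in the trace are reads that arrived while NSID 1 was detached; assuming the decimal sc=11 corresponds to NVMe status 0x0B, Invalid Namespace or Format, that is the completion one would expect for I/O racing a namespace removal. Perf printed this report while the shell loop was still mid-pass, so one more add/resize iteration (null_size=1055) appears below before the loop notices that perf has exited.]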
00:10:32.544 12:14:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:32.544 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:10:32.544 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:10:32.804 true
00:10:32.804 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 510833
00:10:32.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (510833) - No such process
00:10:32.804 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 510833
00:10:32.804 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
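[Note: taken together, the xtrace above (ns_hotplug_stress.sh lines 44-50 while perf PID 510833 was alive, then lines 53-55 once it exited) is this loop in compact form. The following is a minimal sketch reconstructed from the trace, not the script itself; $perf_pid and the starting value of null_size stand in for values that are not visible here:]

    # Hot-remove/re-add namespace 1 and grow NULL1 while the I/O generator runs.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as it appears in the trace
    while kill -0 "$perf_pid"; do                                            # line 44: loop while perf is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45: detach the namespace under I/O
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46: re-attach it
        null_size=$((null_size + 1))                                         # line 49: traced values ran 1016..1055
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                        # line 50: the RPC prints "true" on success
    done
    wait "$perf_pid"                                                         # line 53
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # line 54
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2          # line 55

[Once perf exits, kill -0 fails with the "No such process" error seen above, the loop ends, and both namespaces are torn down before the multi-threaded phase below is set up.]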
00:10:33.064 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:33.324 null0
00:10:33.324 12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
null1
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
12:14:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:33.583 null2
12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:33.583 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:33.583 null3
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:10:33.844 null4
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:33.844 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:10:34.105 null5
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:10:34.105 null6
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:34.105 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:10:34.366 null7
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.366 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
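[Note: the creation of null0..null7 and the burst of add_remove invocations above come from script lines 14-18 and 58-66. The following is a minimal sketch reconstructed from the xtrace, not the script itself; the bdev_null_create arguments are presumably a 100 MiB size and a 4096-byte block size:]

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as it appears in the trace

    add_remove() {                                    # lines 14-18: churn one namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }

    nthreads=8                                        # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # lines 59-60: one null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do              # lines 62-64: eight concurrent hot-plug workers
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                 # line 66: traced below as "wait 517594 517596 ... 517612"

[Each background worker owns one namespace ID, so the jumbled add/remove trace from here on is eight loops concurrently attaching and detaching NSIDs 1 through 8 on nqn.2016-06.io.spdk:cnode1.]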
00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 517594 517596 517599 517602 517605 517608 517610 517612 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.367 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.627 12:14:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.628 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.888 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.888 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.889 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 
12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.148 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.149 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.149 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.149 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.149 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.149 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:35.408 12:14:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... 00:10:35.408-00:10:38.012 (12:14:40-12:14:43): the same three xtrace patterns repeat as concurrent workers hot-add (@17) and hot-remove (@18) namespaces 1-8, backed by bdevs null0-null7, with each @16 counter climbing until i == 10; several hundred near-identical interleaved lines condensed ...]
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:38.012 rmmod nvme_tcp
00:10:38.012 rmmod nvme_fabrics
00:10:38.012 rmmod nvme_keyring
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
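The @16-@18 churn condensed above reduces to a loop of roughly this shape. This is a minimal bash sketch reconstructed from the trace, not the test script itself: the rpc.py path, the cnode1 NQN, the null0-null7 bdevs and the count of 10 are from the log, while the worker function, its name and the backgrounding (inferred from the interleaved counters) are assumptions.

    # Sketch: hot-add/hot-remove namespaces against a live subsystem.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    hotplug_worker() {
        local nsid=$1 bdev=$2 i=0
        while (( i < 10 )); do      # 10 cycles per namespace, as in the trace
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
            (( ++i ))
        done
    }

    for n in $(seq 1 8); do
        hotplug_worker "$n" "null$((n - 1))" &   # namespaces 1-8 map to null0-null7
    done
    wait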
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 510279 ']'
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 510279
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 510279 ']'
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 510279
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 510279
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 510279'
00:10:38.012 killing process with pid 510279
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 510279
00:10:38.012 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 510279
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:38.273 12:14:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.190 12:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:40.190
00:10:40.190 real	0m49.699s
00:10:40.190 user	3m16.782s
00:10:40.190 sys	0m16.182s
00:10:40.190 12:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable
00:10:40.190 12:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:40.190 ************************************
00:10:40.190 END TEST nvmf_ns_hotplug_stress
00:10:40.190 ************************************
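The teardown that closed the test out (nvmftestfini → nvmfcleanup → killprocess → nvmf_tcp_fini) follows one fixed pattern: sync, retry unloading the host-side NVMe modules, kill the target by PID, then drop the private network namespace. A condensed sketch of that pattern as the trace suggests it; the retry count of 20, the module names, the PID and the namespace name are from the log, while the exact control flow inside the helpers is an assumption.

    nvmftestfini_sketch() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # drags out nvme_tcp, nvme_fabrics, nvme_keyring
        done
        modprobe -v -r nvme-fabrics
        set -e
        kill 510279 && wait 510279             # the nvmf_tgt PID recorded at startup
        ip netns delete cvl_0_0_ns_spdk        # presumably what _remove_spdk_ns amounts to
        ip -4 addr flush cvl_0_1
    }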
00:10:40.190 12:14:45 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:40.190 12:14:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:10:40.190 12:14:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:40.190 12:14:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:40.190 ************************************
00:10:40.190 START TEST nvmf_connect_stress
00:10:40.190 ************************************
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:40.452 * Looking for test storage...
00:10:40.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
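One detail worth noting in the environment block above: NVME_HOSTID is just the UUID tail of the freshly generated NVME_HOSTNQN, and the pair is packed into an array so later initiator commands can splice both flags in at once. A sketch of that relationship; `nvme gen-hostnqn` is real nvme-cli, but the parameter expansion shown is only an illustrative way to reproduce the value seen in the log.

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the text after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "${NVME_HOST[@]}"              # consumed later as: nvme connect "${NVME_HOST[@]}" ...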
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 (00:10:40.452 12:14:45): each step prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, then exports and echoes the result; the repeated multi-hundred-character PATH values are condensed ...]
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0
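build_nvmf_app_args, whose branches appear above, only assembles the target's argument vector: the shm id and a full tracepoint mask go in unconditionally on this run, and the remaining branches are skipped because their toggles are 0 or empty. A reduced sketch under those assumptions (the binary path and variable values are from the trace; the function body is a simplification):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()
    export NVMF_APP_SHM_ID

    build_nvmf_app_args() {
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm instance id + all tracepoint groups
        NVMF_APP+=("${NO_HUGE[@]}")                   # empty here, so a no-op
    }
    build_nvmf_app_args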
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:10:40.452 12:14:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=()
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
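gather_supported_nvmf_pci_devs, traced above, groups the known NIC device IDs by family and then narrows to the family under test. A sketch of that grouping; the vendor and device IDs are copied from the trace, while pci_bus_cache is assumed to be an associative array mapping "vendor:device" to the PCI addresses found on the bus.

    declare -A pci_bus_cache    # assumed: filled earlier from a bus scan
    intel=0x8086 mellanox=0x15b3

    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})  # the trace adds six more mlx IDs

    pci_devs=("${e810[@]}")     # SPDK_TEST_NVMF_NICS=e810: only the two 0x159b ports survive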
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:10:48.599 Found 0000:31:00.0 (0x8086 - 0x159b)
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:10:48.599 Found 0000:31:00.1 (0x8086 - 0x159b)
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:10:48.599 Found net devices under 0000:31:00.0: cvl_0_0
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:10:48.599 Found net devices under 0000:31:00.1: cvl_0_1
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:48.599 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:48.600 12:14:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
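nvmf_tcp_init, traced above, splits the two e810 ports into target and initiator roles by moving the target port into a private network namespace and addressing each side on 10.0.0.0/24. The same sequence, lifted almost verbatim from the @244-@264 entries into a standalone sketch (run as root; interface names are the renamed ice ports from this machine):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"                          # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                         # initiator side
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT                # let NVMe/TCP in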
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:48.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:48.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms
00:10:48.600
00:10:48.600 --- 10.0.0.2 ping statistics ---
00:10:48.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.600 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:48.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:48.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:10:48.600
00:10:48.600 --- 10.0.0.1 ping statistics ---
00:10:48.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.600 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=523215
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 523215
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 523215 ']'
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable
00:10:48.600 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
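nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers, up to max_retries attempts. A sketch of that launch-and-poll pattern under stated assumptions: the binary path, PID handling, socket path and retry limit are from the trace; the readiness probe shown (rpc_get_methods) is one plausible implementation, not necessarily the helper's actual check.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1    # bail out if the target died
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
    waitforlisten "$nvmfpid"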
00:10:48.600 [2024-06-10 12:14:54.117407] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:10:48.600 [2024-06-10 12:14:54.117470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:48.600 EAL: No free 2048 kB hugepages reported on node 1
00:10:48.862 [2024-06-10 12:14:54.212627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:48.862 [2024-06-10 12:14:54.306096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:48.862 [2024-06-10 12:14:54.306154] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:48.862 [2024-06-10 12:14:54.306162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:48.862 [2024-06-10 12:14:54.306169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:48.862 [2024-06-10 12:14:54.306175] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:48.862 [2024-06-10 12:14:54.306308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:10:48.862 [2024-06-10 12:14:54.306602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:10:48.862 [2024-06-10 12:14:54.306605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.434 [2024-06-10 12:14:54.932902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable
00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.434 [2024-06-10 12:14:54.970326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
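With the target up, the test provisions it over RPC: one TCP transport, one subsystem capped at 10 namespaces, and one TCP listener on the target-side address. The same three calls, lifted out of the rpc_cmd wrapper into a sketch (flags copied from the trace; rpc.py defaults to the /var/tmp/spdk.sock socket the target just opened):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10      # allow any host, set serial, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420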
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.434 NULL1 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=523290 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.434 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.698 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.959 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.959 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:49.959 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.959 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.959 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.221 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.221 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:50.221 12:14:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.221 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.221 12:14:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.482 12:14:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.482 12:14:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:50.482 12:14:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.482 12:14:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.482 12:14:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[the '[[ 0 == 0 ]]' / 'kill -0 523290' / 'rpc_cmd' / 'xtrace_disable' / 'set +x' supervision loop repeats at sub-second intervals from 00:10:51.055 through 00:10:57.007 while connect_stress runs; the identical iterations are elided]
00:10:57.577 12:15:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.577 12:15:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:57.577 12:15:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.577 12:15:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.577 12:15:02 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.839 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.839 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:57.839 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.839 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.839 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.101 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.101 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:58.101 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.101 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.101 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.363 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.363 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:58.363 12:15:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.363 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.363 12:15:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.625 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.625 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:58.625 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.625 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.625 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.196 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.196 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:59.196 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.196 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.196 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.457 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.457 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:59.457 12:15:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.457 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.457 12:15:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.718 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 523290 00:10:59.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (523290) - No such process 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- 
# wait 523290 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.718 rmmod nvme_tcp 00:10:59.718 rmmod nvme_fabrics 00:10:59.718 rmmod nvme_keyring 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 523215 ']' 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 523215 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 523215 ']' 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 523215 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 523215 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 523215' 00:10:59.718 killing process with pid 523215 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 523215 00:10:59.718 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 523215 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.979 12:15:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.938 12:15:07 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.938 00:11:01.938 real 0m21.711s 00:11:01.938 user 0m42.285s 00:11:01.938 sys 0m9.291s 00:11:01.938 12:15:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:01.938 12:15:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.938 ************************************ 00:11:01.938 END TEST nvmf_connect_stress 00:11:01.938 ************************************ 00:11:01.938 12:15:07 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:01.938 12:15:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:01.938 12:15:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:01.938 12:15:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.199 ************************************ 00:11:02.199 START TEST nvmf_fused_ordering 00:11:02.199 ************************************ 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.199 * Looking for test storage... 00:11:02.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.199 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.199 
12:15:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.200
[paths/export.sh@2-@6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, export it, and echo the result; the near-identical, heavily duplicated PATH strings traced for each step are elided]
00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.200 
12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.200 12:15:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.345 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.345 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.345 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.345 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.346 12:15:15 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:10.346 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:10.346 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:10.346 Found net devices under 0000:31:00.0: cvl_0_0 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:10.346 Found net devices under 0000:31:00.1: cvl_0_1 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:11:10.346 00:11:10.346 --- 10.0.0.2 ping statistics --- 00:11:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.346 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:11:10.346 00:11:10.346 --- 10.0.0.1 ping statistics --- 00:11:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.346 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=530642 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 530642 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:10.346 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 530642 ']' 00:11:10.347 12:15:15 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.347 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:10.347 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.347 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:10.347 12:15:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.347 [2024-06-10 12:15:15.921626] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:10.347 [2024-06-10 12:15:15.921687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.617 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.617 [2024-06-10 12:15:16.021427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.617 [2024-06-10 12:15:16.115180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.617 [2024-06-10 12:15:16.115251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.617 [2024-06-10 12:15:16.115260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.617 [2024-06-10 12:15:16.115267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.617 [2024-06-10 12:15:16.115273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
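[note] Before the fused_ordering app itself runs, it is worth noting that the nvmf_tcp_init sequence traced above (nvmf/common.sh@242-@268) is self-contained enough to replay by hand. A sketch using the names from this run -- cvl_0_0/cvl_0_1 are the ice-driven E810 ports found under 0000:31:00.0/.1, and the 10.0.0.0/24 plan is this harness's convention; both would differ on other hosts:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1            # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open TCP/4420 on the initiator-side link, as the trace shows
  ping -c 1 10.0.0.2                                              # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace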
00:11:10.617 [2024-06-10 12:15:16.115309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.192 [2024-06-10 12:15:16.755638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.192 [2024-06-10 12:15:16.779913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.192 NULL1 00:11:11.192 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.453 12:15:16 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.453 12:15:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:11.453 [2024-06-10 12:15:16.850753] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:11.453 [2024-06-10 12:15:16.850819] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530857 ] 00:11:11.453 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.715 Attached to nqn.2016-06.io.spdk:cnode1 00:11:11.715 Namespace ID: 1 size: 1GB 00:11:11.715 fused_ordering(0) 00:11:11.715 fused_ordering(1) 00:11:11.715 fused_ordering(2) 00:11:11.715 fused_ordering(3) 00:11:11.715 fused_ordering(4) 00:11:11.715 fused_ordering(5) 00:11:11.715 fused_ordering(6) 00:11:11.715 fused_ordering(7) 00:11:11.715 fused_ordering(8) 00:11:11.715 fused_ordering(9) 00:11:11.715 fused_ordering(10) 00:11:11.715 fused_ordering(11) 00:11:11.715 fused_ordering(12) 00:11:11.715 fused_ordering(13) 00:11:11.715 fused_ordering(14) 00:11:11.715 fused_ordering(15) 00:11:11.715 fused_ordering(16) 00:11:11.715 fused_ordering(17) 00:11:11.715 fused_ordering(18) 00:11:11.715 fused_ordering(19) 00:11:11.715 fused_ordering(20) 00:11:11.715 fused_ordering(21) 00:11:11.715 fused_ordering(22) 00:11:11.715 fused_ordering(23) 00:11:11.715 fused_ordering(24) 00:11:11.715 fused_ordering(25) 00:11:11.715 fused_ordering(26) 00:11:11.715 fused_ordering(27) 00:11:11.715 fused_ordering(28) 00:11:11.715 fused_ordering(29) 00:11:11.715 fused_ordering(30) 00:11:11.715 fused_ordering(31) 00:11:11.715 fused_ordering(32) 00:11:11.715 fused_ordering(33) 00:11:11.715 fused_ordering(34) 00:11:11.715 fused_ordering(35) 00:11:11.715 fused_ordering(36) 00:11:11.715 fused_ordering(37) 00:11:11.715 fused_ordering(38) 00:11:11.715 fused_ordering(39) 00:11:11.715 fused_ordering(40) 00:11:11.715 fused_ordering(41) 00:11:11.715 fused_ordering(42) 00:11:11.715 fused_ordering(43) 00:11:11.715 fused_ordering(44) 00:11:11.715 fused_ordering(45) 00:11:11.715 fused_ordering(46) 00:11:11.715 fused_ordering(47) 00:11:11.715 fused_ordering(48) 00:11:11.715 fused_ordering(49) 00:11:11.715 fused_ordering(50) 00:11:11.715 fused_ordering(51) 00:11:11.715 fused_ordering(52) 00:11:11.715 fused_ordering(53) 00:11:11.715 fused_ordering(54) 00:11:11.715 fused_ordering(55) 00:11:11.715 fused_ordering(56) 00:11:11.715 fused_ordering(57) 00:11:11.715 fused_ordering(58) 00:11:11.715 fused_ordering(59) 00:11:11.715 fused_ordering(60) 00:11:11.715 fused_ordering(61) 00:11:11.715 fused_ordering(62) 00:11:11.715 fused_ordering(63) 00:11:11.715 fused_ordering(64) 00:11:11.715 fused_ordering(65) 00:11:11.715 fused_ordering(66) 00:11:11.715 fused_ordering(67) 00:11:11.715 fused_ordering(68) 00:11:11.715 fused_ordering(69) 00:11:11.715 fused_ordering(70) 00:11:11.715 fused_ordering(71) 00:11:11.715 fused_ordering(72) 00:11:11.715 fused_ordering(73) 00:11:11.715 fused_ordering(74) 00:11:11.715 fused_ordering(75) 00:11:11.715 fused_ordering(76) 00:11:11.715 fused_ordering(77) 00:11:11.715 fused_ordering(78) 00:11:11.715 fused_ordering(79) 
00:11:11.715 fused_ordering(80)
[fused_ordering(81) through fused_ordering(402) follow in strict, gap-free ascending order; the repeated per-entry elapsed stamps are elided]
00:11:12.043 fused_ordering(403) 00:11:12.043 fused_ordering(404) 00:11:12.043 fused_ordering(405) 00:11:12.043 fused_ordering(406) 00:11:12.043 fused_ordering(407) 00:11:12.043 fused_ordering(408) 00:11:12.043 fused_ordering(409) 00:11:12.043 fused_ordering(410) 00:11:12.616 fused_ordering(411) 00:11:12.616 fused_ordering(412) 00:11:12.616 fused_ordering(413) 00:11:12.616 fused_ordering(414) 00:11:12.616 fused_ordering(415) 00:11:12.616 fused_ordering(416) 00:11:12.616 fused_ordering(417) 00:11:12.616 fused_ordering(418) 00:11:12.616 fused_ordering(419) 00:11:12.616 fused_ordering(420) 00:11:12.616 fused_ordering(421) 00:11:12.616 fused_ordering(422) 00:11:12.616 fused_ordering(423) 00:11:12.616 fused_ordering(424) 00:11:12.616 fused_ordering(425) 00:11:12.616 fused_ordering(426) 00:11:12.616 fused_ordering(427) 00:11:12.616 fused_ordering(428) 00:11:12.616 fused_ordering(429) 00:11:12.616 fused_ordering(430) 00:11:12.616 fused_ordering(431) 00:11:12.616 fused_ordering(432) 00:11:12.616 fused_ordering(433) 00:11:12.616 fused_ordering(434) 00:11:12.616 fused_ordering(435) 00:11:12.616 fused_ordering(436) 00:11:12.616 fused_ordering(437) 00:11:12.616 fused_ordering(438) 00:11:12.616 fused_ordering(439) 00:11:12.616 fused_ordering(440) 00:11:12.616 fused_ordering(441) 00:11:12.616 fused_ordering(442) 00:11:12.616 fused_ordering(443) 00:11:12.616 fused_ordering(444) 00:11:12.616 fused_ordering(445) 00:11:12.616 fused_ordering(446) 00:11:12.616 fused_ordering(447) 00:11:12.616 fused_ordering(448) 00:11:12.616 fused_ordering(449) 00:11:12.616 fused_ordering(450) 00:11:12.616 fused_ordering(451) 00:11:12.616 fused_ordering(452) 00:11:12.616 fused_ordering(453) 00:11:12.616 fused_ordering(454) 00:11:12.616 fused_ordering(455) 00:11:12.616 fused_ordering(456) 00:11:12.616 fused_ordering(457) 00:11:12.616 fused_ordering(458) 00:11:12.616 fused_ordering(459) 00:11:12.616 fused_ordering(460) 00:11:12.616 fused_ordering(461) 00:11:12.616 fused_ordering(462) 00:11:12.616 fused_ordering(463) 00:11:12.616 fused_ordering(464) 00:11:12.616 fused_ordering(465) 00:11:12.616 fused_ordering(466) 00:11:12.616 fused_ordering(467) 00:11:12.616 fused_ordering(468) 00:11:12.616 fused_ordering(469) 00:11:12.616 fused_ordering(470) 00:11:12.616 fused_ordering(471) 00:11:12.616 fused_ordering(472) 00:11:12.616 fused_ordering(473) 00:11:12.616 fused_ordering(474) 00:11:12.616 fused_ordering(475) 00:11:12.616 fused_ordering(476) 00:11:12.616 fused_ordering(477) 00:11:12.616 fused_ordering(478) 00:11:12.616 fused_ordering(479) 00:11:12.616 fused_ordering(480) 00:11:12.616 fused_ordering(481) 00:11:12.616 fused_ordering(482) 00:11:12.616 fused_ordering(483) 00:11:12.616 fused_ordering(484) 00:11:12.616 fused_ordering(485) 00:11:12.616 fused_ordering(486) 00:11:12.616 fused_ordering(487) 00:11:12.616 fused_ordering(488) 00:11:12.616 fused_ordering(489) 00:11:12.616 fused_ordering(490) 00:11:12.616 fused_ordering(491) 00:11:12.616 fused_ordering(492) 00:11:12.616 fused_ordering(493) 00:11:12.616 fused_ordering(494) 00:11:12.616 fused_ordering(495) 00:11:12.616 fused_ordering(496) 00:11:12.616 fused_ordering(497) 00:11:12.616 fused_ordering(498) 00:11:12.616 fused_ordering(499) 00:11:12.616 fused_ordering(500) 00:11:12.616 fused_ordering(501) 00:11:12.616 fused_ordering(502) 00:11:12.616 fused_ordering(503) 00:11:12.616 fused_ordering(504) 00:11:12.616 fused_ordering(505) 00:11:12.616 fused_ordering(506) 00:11:12.616 fused_ordering(507) 00:11:12.616 fused_ordering(508) 00:11:12.616 fused_ordering(509) 00:11:12.616 
fused_ordering(510) 00:11:12.616 fused_ordering(511) 00:11:12.616 fused_ordering(512) 00:11:12.616 fused_ordering(513) 00:11:12.616 fused_ordering(514) 00:11:12.616 fused_ordering(515) 00:11:12.616 fused_ordering(516) 00:11:12.616 fused_ordering(517) 00:11:12.616 fused_ordering(518) 00:11:12.616 fused_ordering(519) 00:11:12.616 fused_ordering(520) 00:11:12.616 fused_ordering(521) 00:11:12.616 fused_ordering(522) 00:11:12.616 fused_ordering(523) 00:11:12.616 fused_ordering(524) 00:11:12.616 fused_ordering(525) 00:11:12.616 fused_ordering(526) 00:11:12.616 fused_ordering(527) 00:11:12.616 fused_ordering(528) 00:11:12.616 fused_ordering(529) 00:11:12.616 fused_ordering(530) 00:11:12.616 fused_ordering(531) 00:11:12.616 fused_ordering(532) 00:11:12.616 fused_ordering(533) 00:11:12.616 fused_ordering(534) 00:11:12.616 fused_ordering(535) 00:11:12.616 fused_ordering(536) 00:11:12.616 fused_ordering(537) 00:11:12.616 fused_ordering(538) 00:11:12.616 fused_ordering(539) 00:11:12.616 fused_ordering(540) 00:11:12.616 fused_ordering(541) 00:11:12.616 fused_ordering(542) 00:11:12.616 fused_ordering(543) 00:11:12.616 fused_ordering(544) 00:11:12.616 fused_ordering(545) 00:11:12.616 fused_ordering(546) 00:11:12.616 fused_ordering(547) 00:11:12.616 fused_ordering(548) 00:11:12.616 fused_ordering(549) 00:11:12.616 fused_ordering(550) 00:11:12.616 fused_ordering(551) 00:11:12.616 fused_ordering(552) 00:11:12.616 fused_ordering(553) 00:11:12.616 fused_ordering(554) 00:11:12.616 fused_ordering(555) 00:11:12.616 fused_ordering(556) 00:11:12.616 fused_ordering(557) 00:11:12.616 fused_ordering(558) 00:11:12.616 fused_ordering(559) 00:11:12.616 fused_ordering(560) 00:11:12.616 fused_ordering(561) 00:11:12.616 fused_ordering(562) 00:11:12.616 fused_ordering(563) 00:11:12.616 fused_ordering(564) 00:11:12.616 fused_ordering(565) 00:11:12.616 fused_ordering(566) 00:11:12.616 fused_ordering(567) 00:11:12.616 fused_ordering(568) 00:11:12.616 fused_ordering(569) 00:11:12.616 fused_ordering(570) 00:11:12.616 fused_ordering(571) 00:11:12.616 fused_ordering(572) 00:11:12.616 fused_ordering(573) 00:11:12.616 fused_ordering(574) 00:11:12.616 fused_ordering(575) 00:11:12.617 fused_ordering(576) 00:11:12.617 fused_ordering(577) 00:11:12.617 fused_ordering(578) 00:11:12.617 fused_ordering(579) 00:11:12.617 fused_ordering(580) 00:11:12.617 fused_ordering(581) 00:11:12.617 fused_ordering(582) 00:11:12.617 fused_ordering(583) 00:11:12.617 fused_ordering(584) 00:11:12.617 fused_ordering(585) 00:11:12.617 fused_ordering(586) 00:11:12.617 fused_ordering(587) 00:11:12.617 fused_ordering(588) 00:11:12.617 fused_ordering(589) 00:11:12.617 fused_ordering(590) 00:11:12.617 fused_ordering(591) 00:11:12.617 fused_ordering(592) 00:11:12.617 fused_ordering(593) 00:11:12.617 fused_ordering(594) 00:11:12.617 fused_ordering(595) 00:11:12.617 fused_ordering(596) 00:11:12.617 fused_ordering(597) 00:11:12.617 fused_ordering(598) 00:11:12.617 fused_ordering(599) 00:11:12.617 fused_ordering(600) 00:11:12.617 fused_ordering(601) 00:11:12.617 fused_ordering(602) 00:11:12.617 fused_ordering(603) 00:11:12.617 fused_ordering(604) 00:11:12.617 fused_ordering(605) 00:11:12.617 fused_ordering(606) 00:11:12.617 fused_ordering(607) 00:11:12.617 fused_ordering(608) 00:11:12.617 fused_ordering(609) 00:11:12.617 fused_ordering(610) 00:11:12.617 fused_ordering(611) 00:11:12.617 fused_ordering(612) 00:11:12.617 fused_ordering(613) 00:11:12.617 fused_ordering(614) 00:11:12.617 fused_ordering(615) 00:11:13.188 fused_ordering(616) 00:11:13.188 fused_ordering(617) 
00:11:13.188 fused_ordering(618) 00:11:13.188 fused_ordering(619) 00:11:13.188 fused_ordering(620) 00:11:13.188 fused_ordering(621) 00:11:13.188 fused_ordering(622) 00:11:13.188 fused_ordering(623) 00:11:13.188 fused_ordering(624) 00:11:13.188 fused_ordering(625) 00:11:13.188 fused_ordering(626) 00:11:13.188 fused_ordering(627) 00:11:13.188 fused_ordering(628) 00:11:13.188 fused_ordering(629) 00:11:13.188 fused_ordering(630) 00:11:13.188 fused_ordering(631) 00:11:13.188 fused_ordering(632) 00:11:13.188 fused_ordering(633) 00:11:13.188 fused_ordering(634) 00:11:13.188 fused_ordering(635) 00:11:13.188 fused_ordering(636) 00:11:13.188 fused_ordering(637) 00:11:13.188 fused_ordering(638) 00:11:13.188 fused_ordering(639) 00:11:13.188 fused_ordering(640) 00:11:13.188 fused_ordering(641) 00:11:13.188 fused_ordering(642) 00:11:13.188 fused_ordering(643) 00:11:13.188 fused_ordering(644) 00:11:13.188 fused_ordering(645) 00:11:13.188 fused_ordering(646) 00:11:13.188 fused_ordering(647) 00:11:13.188 fused_ordering(648) 00:11:13.188 fused_ordering(649) 00:11:13.188 fused_ordering(650) 00:11:13.188 fused_ordering(651) 00:11:13.188 fused_ordering(652) 00:11:13.188 fused_ordering(653) 00:11:13.188 fused_ordering(654) 00:11:13.188 fused_ordering(655) 00:11:13.188 fused_ordering(656) 00:11:13.188 fused_ordering(657) 00:11:13.188 fused_ordering(658) 00:11:13.188 fused_ordering(659) 00:11:13.188 fused_ordering(660) 00:11:13.188 fused_ordering(661) 00:11:13.188 fused_ordering(662) 00:11:13.188 fused_ordering(663) 00:11:13.188 fused_ordering(664) 00:11:13.188 fused_ordering(665) 00:11:13.188 fused_ordering(666) 00:11:13.188 fused_ordering(667) 00:11:13.188 fused_ordering(668) 00:11:13.188 fused_ordering(669) 00:11:13.188 fused_ordering(670) 00:11:13.188 fused_ordering(671) 00:11:13.188 fused_ordering(672) 00:11:13.188 fused_ordering(673) 00:11:13.188 fused_ordering(674) 00:11:13.188 fused_ordering(675) 00:11:13.188 fused_ordering(676) 00:11:13.188 fused_ordering(677) 00:11:13.188 fused_ordering(678) 00:11:13.188 fused_ordering(679) 00:11:13.188 fused_ordering(680) 00:11:13.188 fused_ordering(681) 00:11:13.188 fused_ordering(682) 00:11:13.188 fused_ordering(683) 00:11:13.188 fused_ordering(684) 00:11:13.188 fused_ordering(685) 00:11:13.188 fused_ordering(686) 00:11:13.188 fused_ordering(687) 00:11:13.188 fused_ordering(688) 00:11:13.189 fused_ordering(689) 00:11:13.189 fused_ordering(690) 00:11:13.189 fused_ordering(691) 00:11:13.189 fused_ordering(692) 00:11:13.189 fused_ordering(693) 00:11:13.189 fused_ordering(694) 00:11:13.189 fused_ordering(695) 00:11:13.189 fused_ordering(696) 00:11:13.189 fused_ordering(697) 00:11:13.189 fused_ordering(698) 00:11:13.189 fused_ordering(699) 00:11:13.189 fused_ordering(700) 00:11:13.189 fused_ordering(701) 00:11:13.189 fused_ordering(702) 00:11:13.189 fused_ordering(703) 00:11:13.189 fused_ordering(704) 00:11:13.189 fused_ordering(705) 00:11:13.189 fused_ordering(706) 00:11:13.189 fused_ordering(707) 00:11:13.189 fused_ordering(708) 00:11:13.189 fused_ordering(709) 00:11:13.189 fused_ordering(710) 00:11:13.189 fused_ordering(711) 00:11:13.189 fused_ordering(712) 00:11:13.189 fused_ordering(713) 00:11:13.189 fused_ordering(714) 00:11:13.189 fused_ordering(715) 00:11:13.189 fused_ordering(716) 00:11:13.189 fused_ordering(717) 00:11:13.189 fused_ordering(718) 00:11:13.189 fused_ordering(719) 00:11:13.189 fused_ordering(720) 00:11:13.189 fused_ordering(721) 00:11:13.189 fused_ordering(722) 00:11:13.189 fused_ordering(723) 00:11:13.189 fused_ordering(724) 00:11:13.189 
fused_ordering(725) 00:11:13.189 fused_ordering(726) 00:11:13.189 fused_ordering(727) 00:11:13.189 fused_ordering(728) 00:11:13.189 fused_ordering(729) 00:11:13.189 fused_ordering(730) 00:11:13.189 fused_ordering(731) 00:11:13.189 fused_ordering(732) 00:11:13.189 fused_ordering(733) 00:11:13.189 fused_ordering(734) 00:11:13.189 fused_ordering(735) 00:11:13.189 fused_ordering(736) 00:11:13.189 fused_ordering(737) 00:11:13.189 fused_ordering(738) 00:11:13.189 fused_ordering(739) 00:11:13.189 fused_ordering(740) 00:11:13.189 fused_ordering(741) 00:11:13.189 fused_ordering(742) 00:11:13.189 fused_ordering(743) 00:11:13.189 fused_ordering(744) 00:11:13.189 fused_ordering(745) 00:11:13.189 fused_ordering(746) 00:11:13.189 fused_ordering(747) 00:11:13.189 fused_ordering(748) 00:11:13.189 fused_ordering(749) 00:11:13.189 fused_ordering(750) 00:11:13.189 fused_ordering(751) 00:11:13.189 fused_ordering(752) 00:11:13.189 fused_ordering(753) 00:11:13.189 fused_ordering(754) 00:11:13.189 fused_ordering(755) 00:11:13.189 fused_ordering(756) 00:11:13.189 fused_ordering(757) 00:11:13.189 fused_ordering(758) 00:11:13.189 fused_ordering(759) 00:11:13.189 fused_ordering(760) 00:11:13.189 fused_ordering(761) 00:11:13.189 fused_ordering(762) 00:11:13.189 fused_ordering(763) 00:11:13.189 fused_ordering(764) 00:11:13.189 fused_ordering(765) 00:11:13.189 fused_ordering(766) 00:11:13.189 fused_ordering(767) 00:11:13.189 fused_ordering(768) 00:11:13.189 fused_ordering(769) 00:11:13.189 fused_ordering(770) 00:11:13.189 fused_ordering(771) 00:11:13.189 fused_ordering(772) 00:11:13.189 fused_ordering(773) 00:11:13.189 fused_ordering(774) 00:11:13.189 fused_ordering(775) 00:11:13.189 fused_ordering(776) 00:11:13.189 fused_ordering(777) 00:11:13.189 fused_ordering(778) 00:11:13.189 fused_ordering(779) 00:11:13.189 fused_ordering(780) 00:11:13.189 fused_ordering(781) 00:11:13.189 fused_ordering(782) 00:11:13.189 fused_ordering(783) 00:11:13.189 fused_ordering(784) 00:11:13.189 fused_ordering(785) 00:11:13.189 fused_ordering(786) 00:11:13.189 fused_ordering(787) 00:11:13.189 fused_ordering(788) 00:11:13.189 fused_ordering(789) 00:11:13.189 fused_ordering(790) 00:11:13.189 fused_ordering(791) 00:11:13.189 fused_ordering(792) 00:11:13.189 fused_ordering(793) 00:11:13.189 fused_ordering(794) 00:11:13.189 fused_ordering(795) 00:11:13.189 fused_ordering(796) 00:11:13.189 fused_ordering(797) 00:11:13.189 fused_ordering(798) 00:11:13.189 fused_ordering(799) 00:11:13.189 fused_ordering(800) 00:11:13.189 fused_ordering(801) 00:11:13.189 fused_ordering(802) 00:11:13.189 fused_ordering(803) 00:11:13.189 fused_ordering(804) 00:11:13.189 fused_ordering(805) 00:11:13.189 fused_ordering(806) 00:11:13.189 fused_ordering(807) 00:11:13.189 fused_ordering(808) 00:11:13.189 fused_ordering(809) 00:11:13.189 fused_ordering(810) 00:11:13.189 fused_ordering(811) 00:11:13.189 fused_ordering(812) 00:11:13.189 fused_ordering(813) 00:11:13.189 fused_ordering(814) 00:11:13.189 fused_ordering(815) 00:11:13.189 fused_ordering(816) 00:11:13.189 fused_ordering(817) 00:11:13.189 fused_ordering(818) 00:11:13.189 fused_ordering(819) 00:11:13.189 fused_ordering(820) 00:11:13.762 fused_ordering(821) 00:11:13.762 fused_ordering(822) 00:11:13.762 fused_ordering(823) 00:11:13.762 fused_ordering(824) 00:11:13.762 fused_ordering(825) 00:11:13.762 fused_ordering(826) 00:11:13.762 fused_ordering(827) 00:11:13.762 fused_ordering(828) 00:11:13.762 fused_ordering(829) 00:11:13.762 fused_ordering(830) 00:11:13.762 fused_ordering(831) 00:11:13.762 fused_ordering(832) 
00:11:13.762 fused_ordering(833) 00:11:13.762 fused_ordering(834) 00:11:13.762 fused_ordering(835) 00:11:13.762 fused_ordering(836) 00:11:13.762 fused_ordering(837) 00:11:13.762 fused_ordering(838) 00:11:13.762 fused_ordering(839) 00:11:13.762 fused_ordering(840) 00:11:13.762 fused_ordering(841) 00:11:13.762 fused_ordering(842) 00:11:13.762 fused_ordering(843) 00:11:13.762 fused_ordering(844) 00:11:13.762 fused_ordering(845) 00:11:13.762 fused_ordering(846) 00:11:13.762 fused_ordering(847) 00:11:13.762 fused_ordering(848) 00:11:13.762 fused_ordering(849) 00:11:13.762 fused_ordering(850) 00:11:13.762 fused_ordering(851) 00:11:13.762 fused_ordering(852) 00:11:13.762 fused_ordering(853) 00:11:13.762 fused_ordering(854) 00:11:13.762 fused_ordering(855) 00:11:13.762 fused_ordering(856) 00:11:13.762 fused_ordering(857) 00:11:13.762 fused_ordering(858) 00:11:13.762 fused_ordering(859) 00:11:13.762 fused_ordering(860) 00:11:13.762 fused_ordering(861) 00:11:13.762 fused_ordering(862) 00:11:13.762 fused_ordering(863) 00:11:13.762 fused_ordering(864) 00:11:13.762 fused_ordering(865) 00:11:13.762 fused_ordering(866) 00:11:13.762 fused_ordering(867) 00:11:13.762 fused_ordering(868) 00:11:13.762 fused_ordering(869) 00:11:13.762 fused_ordering(870) 00:11:13.762 fused_ordering(871) 00:11:13.762 fused_ordering(872) 00:11:13.762 fused_ordering(873) 00:11:13.762 fused_ordering(874) 00:11:13.762 fused_ordering(875) 00:11:13.762 fused_ordering(876) 00:11:13.762 fused_ordering(877) 00:11:13.762 fused_ordering(878) 00:11:13.762 fused_ordering(879) 00:11:13.762 fused_ordering(880) 00:11:13.762 fused_ordering(881) 00:11:13.762 fused_ordering(882) 00:11:13.762 fused_ordering(883) 00:11:13.762 fused_ordering(884) 00:11:13.762 fused_ordering(885) 00:11:13.762 fused_ordering(886) 00:11:13.762 fused_ordering(887) 00:11:13.762 fused_ordering(888) 00:11:13.762 fused_ordering(889) 00:11:13.762 fused_ordering(890) 00:11:13.762 fused_ordering(891) 00:11:13.762 fused_ordering(892) 00:11:13.762 fused_ordering(893) 00:11:13.762 fused_ordering(894) 00:11:13.762 fused_ordering(895) 00:11:13.762 fused_ordering(896) 00:11:13.762 fused_ordering(897) 00:11:13.762 fused_ordering(898) 00:11:13.762 fused_ordering(899) 00:11:13.762 fused_ordering(900) 00:11:13.762 fused_ordering(901) 00:11:13.762 fused_ordering(902) 00:11:13.762 fused_ordering(903) 00:11:13.762 fused_ordering(904) 00:11:13.762 fused_ordering(905) 00:11:13.762 fused_ordering(906) 00:11:13.762 fused_ordering(907) 00:11:13.762 fused_ordering(908) 00:11:13.762 fused_ordering(909) 00:11:13.762 fused_ordering(910) 00:11:13.762 fused_ordering(911) 00:11:13.762 fused_ordering(912) 00:11:13.762 fused_ordering(913) 00:11:13.762 fused_ordering(914) 00:11:13.762 fused_ordering(915) 00:11:13.762 fused_ordering(916) 00:11:13.762 fused_ordering(917) 00:11:13.762 fused_ordering(918) 00:11:13.762 fused_ordering(919) 00:11:13.762 fused_ordering(920) 00:11:13.762 fused_ordering(921) 00:11:13.762 fused_ordering(922) 00:11:13.763 fused_ordering(923) 00:11:13.763 fused_ordering(924) 00:11:13.763 fused_ordering(925) 00:11:13.763 fused_ordering(926) 00:11:13.763 fused_ordering(927) 00:11:13.763 fused_ordering(928) 00:11:13.763 fused_ordering(929) 00:11:13.763 fused_ordering(930) 00:11:13.763 fused_ordering(931) 00:11:13.763 fused_ordering(932) 00:11:13.763 fused_ordering(933) 00:11:13.763 fused_ordering(934) 00:11:13.763 fused_ordering(935) 00:11:13.763 fused_ordering(936) 00:11:13.763 fused_ordering(937) 00:11:13.763 fused_ordering(938) 00:11:13.763 fused_ordering(939) 00:11:13.763 
fused_ordering(940) 00:11:13.763 fused_ordering(941) 00:11:13.763 fused_ordering(942) 00:11:13.763 fused_ordering(943) 00:11:13.763 fused_ordering(944) 00:11:13.763 fused_ordering(945) 00:11:13.763 fused_ordering(946) 00:11:13.763 fused_ordering(947) 00:11:13.763 fused_ordering(948) 00:11:13.763 fused_ordering(949) 00:11:13.763 fused_ordering(950) 00:11:13.763 fused_ordering(951) 00:11:13.763 fused_ordering(952) 00:11:13.763 fused_ordering(953) 00:11:13.763 fused_ordering(954) 00:11:13.763 fused_ordering(955) 00:11:13.763 fused_ordering(956) 00:11:13.763 fused_ordering(957) 00:11:13.763 fused_ordering(958) 00:11:13.763 fused_ordering(959) 00:11:13.763 fused_ordering(960) 00:11:13.763 fused_ordering(961) 00:11:13.763 fused_ordering(962) 00:11:13.763 fused_ordering(963) 00:11:13.763 fused_ordering(964) 00:11:13.763 fused_ordering(965) 00:11:13.763 fused_ordering(966) 00:11:13.763 fused_ordering(967) 00:11:13.763 fused_ordering(968) 00:11:13.763 fused_ordering(969) 00:11:13.763 fused_ordering(970) 00:11:13.763 fused_ordering(971) 00:11:13.763 fused_ordering(972) 00:11:13.763 fused_ordering(973) 00:11:13.763 fused_ordering(974) 00:11:13.763 fused_ordering(975) 00:11:13.763 fused_ordering(976) 00:11:13.763 fused_ordering(977) 00:11:13.763 fused_ordering(978) 00:11:13.763 fused_ordering(979) 00:11:13.763 fused_ordering(980) 00:11:13.763 fused_ordering(981) 00:11:13.763 fused_ordering(982) 00:11:13.763 fused_ordering(983) 00:11:13.763 fused_ordering(984) 00:11:13.763 fused_ordering(985) 00:11:13.763 fused_ordering(986) 00:11:13.763 fused_ordering(987) 00:11:13.763 fused_ordering(988) 00:11:13.763 fused_ordering(989) 00:11:13.763 fused_ordering(990) 00:11:13.763 fused_ordering(991) 00:11:13.763 fused_ordering(992) 00:11:13.763 fused_ordering(993) 00:11:13.763 fused_ordering(994) 00:11:13.763 fused_ordering(995) 00:11:13.763 fused_ordering(996) 00:11:13.763 fused_ordering(997) 00:11:13.763 fused_ordering(998) 00:11:13.763 fused_ordering(999) 00:11:13.763 fused_ordering(1000) 00:11:13.763 fused_ordering(1001) 00:11:13.763 fused_ordering(1002) 00:11:13.763 fused_ordering(1003) 00:11:13.763 fused_ordering(1004) 00:11:13.763 fused_ordering(1005) 00:11:13.763 fused_ordering(1006) 00:11:13.763 fused_ordering(1007) 00:11:13.763 fused_ordering(1008) 00:11:13.763 fused_ordering(1009) 00:11:13.763 fused_ordering(1010) 00:11:13.763 fused_ordering(1011) 00:11:13.763 fused_ordering(1012) 00:11:13.763 fused_ordering(1013) 00:11:13.763 fused_ordering(1014) 00:11:13.763 fused_ordering(1015) 00:11:13.763 fused_ordering(1016) 00:11:13.763 fused_ordering(1017) 00:11:13.763 fused_ordering(1018) 00:11:13.763 fused_ordering(1019) 00:11:13.763 fused_ordering(1020) 00:11:13.763 fused_ordering(1021) 00:11:13.763 fused_ordering(1022) 00:11:13.763 fused_ordering(1023) 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:11:13.763 rmmod nvme_tcp 00:11:13.763 rmmod nvme_fabrics 00:11:13.763 rmmod nvme_keyring 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 530642 ']' 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 530642 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 530642 ']' 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 530642 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 530642 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 530642' 00:11:13.763 killing process with pid 530642 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 530642 00:11:13.763 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 530642 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.024 12:15:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.939 12:15:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.939 00:11:15.939 real 0m13.938s 00:11:15.939 user 0m7.152s 00:11:15.939 sys 0m7.462s 00:11:15.939 12:15:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:15.939 12:15:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.939 ************************************ 00:11:15.939 END TEST nvmf_fused_ordering 00:11:15.939 ************************************ 00:11:15.939 12:15:21 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:15.939 12:15:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:15.939 12:15:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:15.939 12:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 
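The teardown above is the shared nvmftestfini path: the exit traps are cleared, the initiator-side kernel modules (nvme_tcp, nvme_fabrics, nvme_keyring) are unloaded, the nvmf_tgt process that served the test (pid 530642 in this run) is killed, and the target-side network namespace is removed so the next test can recreate it from scratch. A minimal sketch of the equivalent manual cleanup, assuming the same pid, the cvl_0_* interface names, and the cvl_0_0_ns_spdk namespace used throughout this run:

  # stop the SPDK target process that served nqn.2016-06.io.spdk:cnode1
  kill 530642
  # unload the initiator-side kernel modules loaded for nvme connect
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  # remove the namespace holding the target interface, then clear the initiator address
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1
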
************************************ 00:11:16.200 START TEST nvmf_delete_subsystem 00:11:16.200 ************************************ 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:16.200 * Looking for test storage... 00:11:16.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.200 12:15:21 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.200 12:15:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.345 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:24.346 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:24.346 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.346 
12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:24.346 Found net devices under 0000:31:00.0: cvl_0_0 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:24.346 Found net devices under 0000:31:00.1: cvl_0_1 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.346 12:15:29 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:11:24.346 00:11:24.346 --- 10.0.0.2 ping statistics --- 00:11:24.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.346 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:11:24.346 00:11:24.346 --- 10.0.0.1 ping statistics --- 00:11:24.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.346 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=536081 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 536081 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 536081 ']' 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@835 -- # local max_retries=100 00:11:24.346 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.347 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:24.347 12:15:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.607 [2024-06-10 12:15:29.986980] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:24.607 [2024-06-10 12:15:29.987045] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.607 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.607 [2024-06-10 12:15:30.072706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.607 [2024-06-10 12:15:30.149336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.607 [2024-06-10 12:15:30.149380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.607 [2024-06-10 12:15:30.149388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.607 [2024-06-10 12:15:30.149394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.607 [2024-06-10 12:15:30.149400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.607 [2024-06-10 12:15:30.149484] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.607 [2024-06-10 12:15:30.149664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.177 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:25.177 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:11:25.177 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.177 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:25.177 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 [2024-06-10 12:15:30.801433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 [2024-06-10 12:15:30.825588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 NULL1 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 Delay0 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=536230 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:25.439 12:15:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:25.439 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.439 [2024-06-10 12:15:30.922280] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
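The configuration steps above all go through rpc_cmd, the test-harness wrapper around SPDK's JSON-RPC client. A minimal sketch of the same sequence issued directly with scripts/rpc.py, assuming the spdk repository root as the working directory and the /var/tmp/spdk.sock RPC socket named in the waitforlisten output above; the RPC shell variable is only a local shorthand, and all option values are copied verbatim from the log:

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  # TCP transport with the same -o / -u 8192 transport options as this test
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # subsystem open to any host (-a), serial SPDK00000000000001, capped at 10 namespaces (-m)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MiB null backing bdev with 512-byte blocks, wrapped in a delay bdev that
  # injects 1000000 us (about 1 s) of average and p99 latency on reads and writes
  $RPC bdev_null_create NULL1 1000 512
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is what gives the next step its teeth: with spdk_nvme_perf holding a queue depth of 128 (-q 128) for 5 seconds against a namespace that delays every I/O by about a second, the nvmf_delete_subsystem call below lands while commands are still in flight. Those commands complete with the aborted status seen next (sct=0, sc=8: generic status "command aborted due to SQ deletion"), and the interleaved "starting I/O failed: -6" lines are most likely -ENXIO returned on the initiator side as the qpair goes down.
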
00:11:27.354 12:15:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:27.355 12:15:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:11:27.355 12:15:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[long runs of identical in-flight completions, 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)', interleaved with 'starting I/O failed: -6' submission failures, elided; they repeat between each of the qpair-state errors below]
[2024-06-10 12:15:33.046086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaece90 is same with the state(5) to be set
[2024-06-10 12:15:33.051640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa868000c00 is same with the state(5) to be set
[2024-06-10 12:15:34.022325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb500 is same with the state(5) to be set
[2024-06-10 12:15:34.049482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeccb0 is same with the state(5) to be set
[2024-06-10 12:15:34.049818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaebd00 is same with the state(5) to be set
[2024-06-10 12:15:34.054170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa86800c780 is same with the state(5) to be set
[2024-06-10 12:15:34.054263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa86800bfe0 is same with the state(5) to be set
00:11:28.560 Initializing NVMe Controllers
00:11:28.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:28.560 Controller IO queue size 128, less than required.
00:11:28.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:28.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:28.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:28.560 Initialization complete. Launching workers.
00:11:28.560 ========================================================
00:11:28.560 Latency(us)
00:11:28.560 Device Information : IOPS MiB/s Average min max
00:11:28.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.40 0.08 896339.05 212.26 1005634.87
00:11:28.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.43 0.08 941823.66 280.56 2002298.70
00:11:28.560 ========================================================
00:11:28.560 Total : 326.83 0.16 918387.99 212.26 2002298.70
00:11:28.560
00:11:28.560 [2024-06-10 12:15:34.054757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacb500 (9): Bad file descriptor
00:11:28.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:28.560 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 536230
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:29.132 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 536230
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (536230) - No such process
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 536230
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 536230
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 536230
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 ))
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]]
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 ))
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.133 [2024-06-10 12:15:34.586522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=536917 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:29.133 12:15:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.133 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.133 [2024-06-10 12:15:34.662890] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
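That completed the first assertion: the subsystem was deleted with I/O in flight, every queued command was failed back, spdk_nvme_perf died, and NOT wait confirmed its nonzero exit. The trace here re-creates the subsystem and starts a second, 3-second perf run (pid 536917) that is left undisturbed, so its plain wait at line 67 must succeed. A sketch of the first pass's delete-under-load pattern, assuming perf_pid from the setup sketch earlier; the loop bound is 30 half-second polls in that pass (20 in this one), and the real script expresses the final check through its NOT helper:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued mid-workload

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # poll until perf exits
        (( delay++ > 30 )) && exit 1            # a hang here fails the test
        sleep 0.5
    done

    # bash remembers a reaped child's exit status, so wait still reports it;
    # perf succeeding despite the deletion would itself be a failure.
    if wait "$perf_pid"; then
        echo "spdk_nvme_perf survived subsystem deletion" >&2
        exit 1
    fi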
00:11:29.704 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.704 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:29.704 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.275 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.275 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:30.275 12:15:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.535 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.535 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:30.535 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.105 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.105 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:31.105 12:15:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.815 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.815 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:31.815 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.076 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.076 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917 00:11:32.076 12:15:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.336 Initializing NVMe Controllers 00:11:32.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.336 Controller IO queue size 128, less than required. 00:11:32.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:32.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:32.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:32.336 Initialization complete. Launching workers. 
00:11:32.336 ========================================================
00:11:32.336 Latency(us)
00:11:32.336 Device Information : IOPS MiB/s Average min max
00:11:32.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001849.61 1000086.84 1007428.39
00:11:32.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002532.28 1000147.17 1009239.64
00:11:32.336 ========================================================
00:11:32.336 Total : 256.00 0.12 1002190.95 1000086.84 1009239.64
00:11:32.336
00:11:32.597 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 536917
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (536917) - No such process
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 536917
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 536081 ']'
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 536081
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 536081 ']'
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 536081
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 536081
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 536081'
killing process with pid 536081
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 536081
12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 536081
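Teardown mirrors setup: nvmfcleanup unloads the kernel initiator stack and killprocess stops the nvmf_tgt app (pid 536081 in this run), first checking that the PID still names an SPDK reactor rather than sudo. A condensed sketch, assuming $nvmfpid holds the target's PID as in nvmf/common.sh (the script's 20-attempt retry loop is omitted); the lines that follow finish the same cleanup by flushing the test network state:

    sync
    modprobe -v -r nvme-tcp       # the -v output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics

    # Kill the target only if the PID still names something we started.
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]; then
        kill "$nvmfpid"           # stop the nvmf_tgt launched at the start of the test
        wait "$nvmfpid"
    fi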
00:11:32.858 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.858 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.858 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.858 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.859 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.859 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.859 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.859 12:15:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.402 12:15:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.402 00:11:35.402 real 0m18.921s 00:11:35.402 user 0m31.013s 00:11:35.402 sys 0m6.960s 00:11:35.402 12:15:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:35.402 12:15:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.402 ************************************ 00:11:35.402 END TEST nvmf_delete_subsystem 00:11:35.402 ************************************ 00:11:35.402 12:15:40 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.402 12:15:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:35.402 12:15:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:35.402 12:15:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.402 ************************************ 00:11:35.402 START TEST nvmf_ns_masking 00:11:35.402 ************************************ 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.402 * Looking for test storage... 
00:11:35.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repetitions of the same toolchain components elided; the full value appears in the export.sh@2 entry above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:15:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same repeated value elided]:/var/lib/snapd/snap/bin
12:15:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
12:15:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated value elided]:/var/lib/snapd/snap/bin
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=1a084898-05b7-461e-9e6b-f28901ae6247
12:15:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit
12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']'
12:15:40
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.402 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.403 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.403 12:15:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.403 12:15:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:43.541 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:43.541 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.541 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:43.542 Found net devices under 0000:31:00.0: cvl_0_0 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:43.542 Found net devices under 0000:31:00.1: cvl_0_1 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:11:43.542 00:11:43.542 --- 10.0.0.2 ping statistics --- 00:11:43.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.542 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:43.542 00:11:43.542 --- 10.0.0.1 ping statistics --- 00:11:43.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.542 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=542486 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 542486 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 542486 ']' 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:43.542 12:15:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.542 [2024-06-10 12:15:48.990817] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
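Before nvmf_tgt comes up for the masking test, nvmftestinit wires the two ice/E810 ports (cvl_0_0, cvl_0_1) into an initiator/target pair: the target port moves into its own network namespace so 10.0.0.1-to-10.0.0.2 traffic genuinely crosses the link, port 4420 is opened in the firewall, and both directions are ping-verified, which is the output just above. The wiring, reassembled from the trace with device names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # The target itself then runs inside the namespace (path shortened here):
    #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF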
00:11:43.542 [2024-06-10 12:15:48.990884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.542 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.542 [2024-06-10 12:15:49.069423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.542 [2024-06-10 12:15:49.144150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.542 [2024-06-10 12:15:49.144183] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.542 [2024-06-10 12:15:49.144192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.542 [2024-06-10 12:15:49.144204] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.542 [2024-06-10 12:15:49.144211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.542 [2024-06-10 12:15:49.144291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.542 [2024-06-10 12:15:49.144417] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.542 [2024-06-10 12:15:49.144566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.542 [2024-06-10 12:15:49.144568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.483 [2024-06-10 12:15:49.935158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:44.483 12:15:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.743 Malloc1 00:11:44.743 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.743 Malloc2 00:11:44.743 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.003 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:45.263 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.263 [2024-06-10 12:15:50.797657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.263 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:45.264 12:15:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a084898-05b7-461e-9e6b-f28901ae6247 -a 10.0.0.2 -s 4420 -i 4 00:11:45.524 12:15:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.524 12:15:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:45.524 12:15:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.524 12:15:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:11:45.524 12:15:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:47.435 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.696 [ 0]:0x1 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8971bb719164df395e3db773dfb3ac7 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8971bb719164df395e3db773dfb3ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.696 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 
00:11:47.956 [ 0]:0x1 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8971bb719164df395e3db773dfb3ac7 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8971bb719164df395e3db773dfb3ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.956 [ 1]:0x2 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.956 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.216 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:48.476 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:48.476 12:15:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a084898-05b7-461e-9e6b-f28901ae6247 -a 10.0.0.2 -s 4420 -i 4 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:11:48.476 12:15:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.020 [ 0]:0x2 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.020 [ 0]:0x1 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8971bb719164df395e3db773dfb3ac7 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8971bb719164df395e3db773dfb3ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.020 [ 1]:0x2 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.020 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:51.281 
12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.281 [ 0]:0x2 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.281 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.542 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:51.542 12:15:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a084898-05b7-461e-9e6b-f28901ae6247 -a 10.0.0.2 -s 4420 -i 4 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:11:51.542 12:15:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.092 [ 0]:0x1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8971bb719164df395e3db773dfb3ac7 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8971bb719164df395e3db773dfb3ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.092 [ 1]:0x2 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.092 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.093 [ 0]:0x2 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.093 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.093 [2024-06-10 12:15:59.697204] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:54.354 request: 00:11:54.354 { 00:11:54.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.354 "nsid": 2, 00:11:54.354 "host": "nqn.2016-06.io.spdk:host1", 00:11:54.354 "method": 
"nvmf_ns_remove_host", 00:11:54.354 "req_id": 1 00:11:54.354 } 00:11:54.354 Got JSON-RPC error response 00:11:54.354 response: 00:11:54.354 { 00:11:54.354 "code": -32602, 00:11:54.354 "message": "Invalid parameters" 00:11:54.354 } 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.354 [ 0]:0x2 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9f7cc8c92cf1479a86614873a251456c 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9f7cc8c92cf1479a86614873a251456c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:54.354 12:15:59 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.616 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.616 rmmod nvme_tcp 00:11:54.616 rmmod nvme_fabrics 00:11:54.877 rmmod nvme_keyring 00:11:54.877 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 542486 ']' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 542486 ']' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 542486' 00:11:54.878 killing process with pid 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 542486 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.878 12:16:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.432 12:16:02 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.432 00:11:57.432 real 0m21.985s 00:11:57.432 user 0m50.162s 00:11:57.432 sys 0m7.495s 00:11:57.432 12:16:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:57.432 12:16:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.432 ************************************ 00:11:57.432 END TEST nvmf_ns_masking 00:11:57.432 ************************************ 00:11:57.432 12:16:02 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:57.432 12:16:02 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.432 12:16:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:57.432 12:16:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:57.432 12:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:57.432 ************************************ 00:11:57.432 START TEST nvmf_nvme_cli 00:11:57.432 ************************************ 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.432 * Looking for test storage... 00:11:57.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.432 12:16:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.433 12:16:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:05.576 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:05.576 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.576 12:16:10 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:05.576 Found net devices under 0000:31:00.0: cvl_0_0 00:12:05.576 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:05.577 Found net devices under 0000:31:00.1: cvl_0_1 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:12:05.577 00:12:05.577 --- 10.0.0.2 ping statistics --- 00:12:05.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.577 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:05.577 00:12:05.577 --- 10.0.0.1 ping statistics --- 00:12:05.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.577 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=549505 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 549505 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 549505 ']' 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
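
The ip(8) calls just above are the whole network bench for this phy run: one port of the e810 pair (cvl_0_0) is moved into a private network namespace where the SPDK target will live, while its peer port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the traced commands (interface names, addresses, and port 4420 are exactly as logged; this is a summary, not the verbatim nvmf/common.sh source):

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic arriving on the initiator-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks above confirm both directions pass traffic before nvmf_tgt is launched inside the namespace (the waitforlisten on pid 549505 that this "Waiting for process..." line belongs to), so the nvme connect calls later in the log traverse the two physical ports rather than a purely in-host path.
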
00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:05.577 12:16:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:05.577 [2024-06-10 12:16:11.045934] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:05.577 [2024-06-10 12:16:11.046028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.577 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.577 [2024-06-10 12:16:11.127967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.839 [2024-06-10 12:16:11.203637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.839 [2024-06-10 12:16:11.203676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.839 [2024-06-10 12:16:11.203689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.839 [2024-06-10 12:16:11.203695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.839 [2024-06-10 12:16:11.203701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.839 [2024-06-10 12:16:11.203835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.839 [2024-06-10 12:16:11.203951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.839 [2024-06-10 12:16:11.204107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.839 [2024-06-10 12:16:11.204108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 [2024-06-10 12:16:11.871741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 Malloc0 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 Malloc1 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 [2024-06-10 12:16:11.961372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.412 12:16:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:12:06.677 00:12:06.677 Discovery Log Number of Records 2, Generation counter 2 00:12:06.677 =====Discovery Log Entry 0====== 00:12:06.677 trtype: tcp 00:12:06.677 adrfam: ipv4 00:12:06.677 subtype: current discovery subsystem 00:12:06.677 treq: not required 00:12:06.677 portid: 0 00:12:06.677 trsvcid: 4420 00:12:06.677 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:06.677 traddr: 10.0.0.2 00:12:06.677 eflags: explicit discovery connections, duplicate discovery information 00:12:06.677 sectype: none 00:12:06.677 =====Discovery Log Entry 1====== 00:12:06.677 trtype: tcp 00:12:06.677 adrfam: ipv4 00:12:06.677 subtype: nvme subsystem 00:12:06.677 treq: not required 00:12:06.677 portid: 0 00:12:06.677 trsvcid: 4420 
00:12:06.677 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:06.677 traddr: 10.0.0.2 00:12:06.677 eflags: none 00:12:06.677 sectype: none 00:12:06.677 12:16:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:06.677 12:16:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:06.677 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:06.677 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.677 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:06.678 12:16:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:08.068 12:16:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.065 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:10.326 12:16:15 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:10.326 /dev/nvme0n1 ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:10.326 12:16:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:10.587 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:10.587 12:16:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.587 12:16:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.587 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:12:10.587 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:10.587 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:10.849 12:16:16 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.849 rmmod nvme_tcp 00:12:10.849 rmmod nvme_fabrics 00:12:10.849 rmmod nvme_keyring 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 549505 ']' 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 549505 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 549505 ']' 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 549505 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 549505 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 549505' 00:12:10.849 killing process with pid 549505 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 549505 00:12:10.849 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 549505 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.110 12:16:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.025 12:16:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.025 00:12:13.025 real 0m15.952s 00:12:13.025 user 0m23.534s 00:12:13.025 sys 0m6.614s 00:12:13.025 12:16:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:13.025 12:16:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.025 ************************************ 00:12:13.025 END TEST nvmf_nvme_cli 00:12:13.025 ************************************ 00:12:13.025 12:16:18 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:13.025 12:16:18 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:13.025 12:16:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:13.025 12:16:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:13.025 12:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.286 ************************************ 00:12:13.286 START TEST nvmf_vfio_user 00:12:13.286 ************************************ 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:13.286 * Looking for test storage... 00:12:13.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.286 12:16:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:13.287 
12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=551267 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 551267' 00:12:13.287 Process pid: 551267 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 551267 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 551267 ']' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:13.287 12:16:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:13.287 [2024-06-10 12:16:18.845403] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:13.287 [2024-06-10 12:16:18.845475] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.287 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.547 [2024-06-10 12:16:18.917420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.547 [2024-06-10 12:16:18.993215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.547 [2024-06-10 12:16:18.993256] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.547 [2024-06-10 12:16:18.993263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.547 [2024-06-10 12:16:18.993270] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.547 [2024-06-10 12:16:18.993276] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
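For reference, the vfio-user bring-up that the trace below performs, reduced to its shell and rpc.py calls. This is a condensed sketch, not the harness itself: $SPDK stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path used above, and the readiness poll is an assumed stand-in for the harness's waitforlisten helper.

  # start the target on cores 0-3 with all tracepoint groups enabled
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  rpc="$SPDK/scripts/rpc.py"
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done   # crude stand-in for waitforlisten

  # one vfio-user transport, then a malloc-backed subsystem per device directory
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # the trace repeats the same four RPCs for Malloc2/cnode2 under /var/run/vfio-user/domain/vfio-user2/2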
00:12:13.547 [2024-06-10 12:16:18.993419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.547 [2024-06-10 12:16:18.993535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.547 [2024-06-10 12:16:18.993727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.547 [2024-06-10 12:16:18.993729] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.116 12:16:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:14.116 12:16:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:14.116 12:16:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:15.059 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:15.321 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:15.321 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:15.321 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:15.321 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:15.321 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:15.581 Malloc1 00:12:15.581 12:16:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:15.581 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:15.841 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:16.100 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:16.100 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:16.100 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:16.100 Malloc2 00:12:16.101 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:16.360 12:16:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:16.620 12:16:22 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:16.620 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:16.620 [2024-06-10 12:16:22.199341] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:16.620 [2024-06-10 12:16:22.199385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551958 ] 00:12:16.620 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.883 [2024-06-10 12:16:22.229845] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:16.883 [2024-06-10 12:16:22.234278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:16.883 [2024-06-10 12:16:22.234300] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6c5859e000 00:12:16.883 [2024-06-10 12:16:22.235273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.236273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.237275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.238283] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.239298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.240293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.241307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.242315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.883 [2024-06-10 12:16:22.243325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:16.883 [2024-06-10 12:16:22.243337] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6c58593000 00:12:16.883 [2024-06-10 12:16:22.244664] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:16.883 [2024-06-10 12:16:22.266350] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:16.883 [2024-06-10 12:16:22.266378] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:16.883 [2024-06-10 12:16:22.268477] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:16.883 [2024-06-10 12:16:22.268526] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:16.883 [2024-06-10 12:16:22.268616] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:16.883 [2024-06-10 12:16:22.268635] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:16.883 [2024-06-10 12:16:22.268641] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:16.883 [2024-06-10 12:16:22.269476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:16.883 [2024-06-10 12:16:22.269485] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:16.883 [2024-06-10 12:16:22.269492] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:16.883 [2024-06-10 12:16:22.270486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:16.883 [2024-06-10 12:16:22.270494] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:16.883 [2024-06-10 12:16:22.270501] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.271492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:16.883 [2024-06-10 12:16:22.271500] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.272497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:16.883 [2024-06-10 12:16:22.272505] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:16.883 [2024-06-10 12:16:22.272510] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.272517] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.272622] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:16.883 [2024-06-10 12:16:22.272627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.272632] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:16.883 [2024-06-10 12:16:22.273518] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:16.883 [2024-06-10 12:16:22.274505] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:16.883 [2024-06-10 12:16:22.275514] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:16.883 [2024-06-10 12:16:22.276512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.883 [2024-06-10 12:16:22.276564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:16.883 [2024-06-10 12:16:22.277522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:16.883 [2024-06-10 12:16:22.277530] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:16.883 [2024-06-10 12:16:22.277535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:16.883 [2024-06-10 12:16:22.277556] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:16.883 [2024-06-10 12:16:22.277563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:16.883 [2024-06-10 12:16:22.277582] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:16.883 [2024-06-10 12:16:22.277587] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.883 [2024-06-10 12:16:22.277600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.883 [2024-06-10 12:16:22.277633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:16.883 [2024-06-10 12:16:22.277643] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:16.883 [2024-06-10 12:16:22.277648] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:16.883 [2024-06-10 12:16:22.277652] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:16.883 [2024-06-10 12:16:22.277659] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:16.883 [2024-06-10 12:16:22.277666] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:12:16.883 [2024-06-10 12:16:22.277671] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:16.883 [2024-06-10 12:16:22.277675] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:16.883 [2024-06-10 12:16:22.277683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:16.883 [2024-06-10 12:16:22.277693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:16.883 [2024-06-10 12:16:22.277702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:16.883 [2024-06-10 12:16:22.277713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.884 [2024-06-10 12:16:22.277721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.884 [2024-06-10 12:16:22.277729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.884 [2024-06-10 12:16:22.277737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.884 [2024-06-10 12:16:22.277742] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.277770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.277776] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:16.884 [2024-06-10 12:16:22.277781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277793] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.277814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.277863] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277871] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277879] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:16.884 [2024-06-10 12:16:22.277883] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:16.884 [2024-06-10 12:16:22.277889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.277901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.277910] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:16.884 [2024-06-10 12:16:22.277919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277933] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:16.884 [2024-06-10 12:16:22.277937] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.884 [2024-06-10 12:16:22.277943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.277960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.277973] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.277987] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:16.884 [2024-06-10 12:16:22.277991] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.884 [2024-06-10 12:16:22.277997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278020] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278034] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278045] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278050] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:16.884 [2024-06-10 12:16:22.278055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:16.884 [2024-06-10 12:16:22.278060] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:16.884 [2024-06-10 12:16:22.278081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278159] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:16.884 [2024-06-10 12:16:22.278164] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:16.884 [2024-06-10 12:16:22.278167] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:16.884 [2024-06-10 12:16:22.278171] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:16.884 [2024-06-10 12:16:22.278177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:16.884 [2024-06-10 12:16:22.278184] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:16.884 [2024-06-10 12:16:22.278188] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:16.884 [2024-06-10 12:16:22.278198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278205] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:16.884 [2024-06-10 12:16:22.278209] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.884 [2024-06-10 12:16:22.278215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278222] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:16.884 [2024-06-10 12:16:22.278226] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:16.884 [2024-06-10 12:16:22.278232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:16.884 [2024-06-10 12:16:22.278239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:16.884 [2024-06-10 12:16:22.278270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:16.884 ===================================================== 00:12:16.884 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:16.884 ===================================================== 00:12:16.884 Controller Capabilities/Features 00:12:16.884 ================================ 00:12:16.884 Vendor ID: 4e58 00:12:16.884 Subsystem Vendor ID: 4e58 00:12:16.884 Serial Number: SPDK1 00:12:16.884 Model Number: SPDK bdev Controller 00:12:16.884 Firmware Version: 24.09 00:12:16.884 Recommended Arb Burst: 6 00:12:16.884 IEEE OUI Identifier: 8d 6b 50 00:12:16.884 Multi-path I/O 00:12:16.884 May have multiple subsystem ports: Yes 00:12:16.884 May have multiple controllers: Yes 00:12:16.884 Associated with SR-IOV VF: No 00:12:16.884 Max Data Transfer Size: 131072 00:12:16.884 Max Number of Namespaces: 32 00:12:16.884 Max Number of I/O Queues: 127 00:12:16.884 NVMe Specification Version (VS): 1.3 00:12:16.884 NVMe Specification Version (Identify): 1.3 00:12:16.884 Maximum Queue Entries: 256 00:12:16.884 Contiguous Queues Required: Yes 00:12:16.884 Arbitration Mechanisms Supported 00:12:16.884 Weighted Round Robin: Not Supported 00:12:16.884 Vendor Specific: Not Supported 00:12:16.884 Reset Timeout: 15000 ms 00:12:16.885 Doorbell Stride: 4 bytes 00:12:16.885 NVM Subsystem Reset: Not Supported 00:12:16.885 Command Sets Supported 00:12:16.885 NVM Command Set: Supported 00:12:16.885 Boot Partition: Not Supported 00:12:16.885 Memory Page Size Minimum: 4096 bytes 00:12:16.885 Memory Page Size Maximum: 4096 bytes 00:12:16.885 Persistent Memory Region: Not Supported 00:12:16.885 Optional Asynchronous Events Supported 00:12:16.885 Namespace Attribute Notices: Supported 00:12:16.885 Firmware Activation Notices: Not Supported 00:12:16.885 ANA Change Notices: Not Supported 00:12:16.885 PLE Aggregate Log Change Notices: 
Not Supported 00:12:16.885 LBA Status Info Alert Notices: Not Supported 00:12:16.885 EGE Aggregate Log Change Notices: Not Supported 00:12:16.885 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.885 Zone Descriptor Change Notices: Not Supported 00:12:16.885 Discovery Log Change Notices: Not Supported 00:12:16.885 Controller Attributes 00:12:16.885 128-bit Host Identifier: Supported 00:12:16.885 Non-Operational Permissive Mode: Not Supported 00:12:16.885 NVM Sets: Not Supported 00:12:16.885 Read Recovery Levels: Not Supported 00:12:16.885 Endurance Groups: Not Supported 00:12:16.885 Predictable Latency Mode: Not Supported 00:12:16.885 Traffic Based Keep ALive: Not Supported 00:12:16.885 Namespace Granularity: Not Supported 00:12:16.885 SQ Associations: Not Supported 00:12:16.885 UUID List: Not Supported 00:12:16.885 Multi-Domain Subsystem: Not Supported 00:12:16.885 Fixed Capacity Management: Not Supported 00:12:16.885 Variable Capacity Management: Not Supported 00:12:16.885 Delete Endurance Group: Not Supported 00:12:16.885 Delete NVM Set: Not Supported 00:12:16.885 Extended LBA Formats Supported: Not Supported 00:12:16.885 Flexible Data Placement Supported: Not Supported 00:12:16.885 00:12:16.885 Controller Memory Buffer Support 00:12:16.885 ================================ 00:12:16.885 Supported: No 00:12:16.885 00:12:16.885 Persistent Memory Region Support 00:12:16.885 ================================ 00:12:16.885 Supported: No 00:12:16.885 00:12:16.885 Admin Command Set Attributes 00:12:16.885 ============================ 00:12:16.885 Security Send/Receive: Not Supported 00:12:16.885 Format NVM: Not Supported 00:12:16.885 Firmware Activate/Download: Not Supported 00:12:16.885 Namespace Management: Not Supported 00:12:16.885 Device Self-Test: Not Supported 00:12:16.885 Directives: Not Supported 00:12:16.885 NVMe-MI: Not Supported 00:12:16.885 Virtualization Management: Not Supported 00:12:16.885 Doorbell Buffer Config: Not Supported 00:12:16.885 Get LBA Status Capability: Not Supported 00:12:16.885 Command & Feature Lockdown Capability: Not Supported 00:12:16.885 Abort Command Limit: 4 00:12:16.885 Async Event Request Limit: 4 00:12:16.885 Number of Firmware Slots: N/A 00:12:16.885 Firmware Slot 1 Read-Only: N/A 00:12:16.885 Firmware Activation Without Reset: N/A 00:12:16.885 Multiple Update Detection Support: N/A 00:12:16.885 Firmware Update Granularity: No Information Provided 00:12:16.885 Per-Namespace SMART Log: No 00:12:16.885 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.885 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:16.885 Command Effects Log Page: Supported 00:12:16.885 Get Log Page Extended Data: Supported 00:12:16.885 Telemetry Log Pages: Not Supported 00:12:16.885 Persistent Event Log Pages: Not Supported 00:12:16.885 Supported Log Pages Log Page: May Support 00:12:16.885 Commands Supported & Effects Log Page: Not Supported 00:12:16.885 Feature Identifiers & Effects Log Page:May Support 00:12:16.885 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.885 Data Area 4 for Telemetry Log: Not Supported 00:12:16.885 Error Log Page Entries Supported: 128 00:12:16.885 Keep Alive: Supported 00:12:16.885 Keep Alive Granularity: 10000 ms 00:12:16.885 00:12:16.885 NVM Command Set Attributes 00:12:16.885 ========================== 00:12:16.885 Submission Queue Entry Size 00:12:16.885 Max: 64 00:12:16.885 Min: 64 00:12:16.885 Completion Queue Entry Size 00:12:16.885 Max: 16 00:12:16.885 Min: 16 00:12:16.885 Number of Namespaces: 32 00:12:16.885 Compare 
Command: Supported 00:12:16.885 Write Uncorrectable Command: Not Supported 00:12:16.885 Dataset Management Command: Supported 00:12:16.885 Write Zeroes Command: Supported 00:12:16.885 Set Features Save Field: Not Supported 00:12:16.885 Reservations: Not Supported 00:12:16.885 Timestamp: Not Supported 00:12:16.885 Copy: Supported 00:12:16.885 Volatile Write Cache: Present 00:12:16.885 Atomic Write Unit (Normal): 1 00:12:16.885 Atomic Write Unit (PFail): 1 00:12:16.885 Atomic Compare & Write Unit: 1 00:12:16.885 Fused Compare & Write: Supported 00:12:16.885 Scatter-Gather List 00:12:16.885 SGL Command Set: Supported (Dword aligned) 00:12:16.885 SGL Keyed: Not Supported 00:12:16.885 SGL Bit Bucket Descriptor: Not Supported 00:12:16.885 SGL Metadata Pointer: Not Supported 00:12:16.885 Oversized SGL: Not Supported 00:12:16.885 SGL Metadata Address: Not Supported 00:12:16.885 SGL Offset: Not Supported 00:12:16.885 Transport SGL Data Block: Not Supported 00:12:16.885 Replay Protected Memory Block: Not Supported 00:12:16.885 00:12:16.885 Firmware Slot Information 00:12:16.885 ========================= 00:12:16.885 Active slot: 1 00:12:16.885 Slot 1 Firmware Revision: 24.09 00:12:16.885 00:12:16.885 00:12:16.885 Commands Supported and Effects 00:12:16.885 ============================== 00:12:16.885 Admin Commands 00:12:16.885 -------------- 00:12:16.885 Get Log Page (02h): Supported 00:12:16.885 Identify (06h): Supported 00:12:16.885 Abort (08h): Supported 00:12:16.885 Set Features (09h): Supported 00:12:16.885 Get Features (0Ah): Supported 00:12:16.885 Asynchronous Event Request (0Ch): Supported 00:12:16.885 Keep Alive (18h): Supported 00:12:16.885 I/O Commands 00:12:16.885 ------------ 00:12:16.885 Flush (00h): Supported LBA-Change 00:12:16.885 Write (01h): Supported LBA-Change 00:12:16.885 Read (02h): Supported 00:12:16.885 Compare (05h): Supported 00:12:16.885 Write Zeroes (08h): Supported LBA-Change 00:12:16.885 Dataset Management (09h): Supported LBA-Change 00:12:16.885 Copy (19h): Supported LBA-Change 00:12:16.885 Unknown (79h): Supported LBA-Change 00:12:16.885 Unknown (7Ah): Supported 00:12:16.885 00:12:16.885 Error Log 00:12:16.885 ========= 00:12:16.885 00:12:16.885 Arbitration 00:12:16.885 =========== 00:12:16.885 Arbitration Burst: 1 00:12:16.885 00:12:16.885 Power Management 00:12:16.885 ================ 00:12:16.885 Number of Power States: 1 00:12:16.885 Current Power State: Power State #0 00:12:16.885 Power State #0: 00:12:16.885 Max Power: 0.00 W 00:12:16.885 Non-Operational State: Operational 00:12:16.885 Entry Latency: Not Reported 00:12:16.885 Exit Latency: Not Reported 00:12:16.885 Relative Read Throughput: 0 00:12:16.885 Relative Read Latency: 0 00:12:16.885 Relative Write Throughput: 0 00:12:16.885 Relative Write Latency: 0 00:12:16.885 Idle Power: Not Reported 00:12:16.885 Active Power: Not Reported 00:12:16.885 Non-Operational Permissive Mode: Not Supported 00:12:16.885 00:12:16.885 Health Information 00:12:16.885 ================== 00:12:16.885 Critical Warnings: 00:12:16.885 Available Spare Space: OK 00:12:16.885 Temperature: OK 00:12:16.885 Device Reliability: OK 00:12:16.885 Read Only: No 00:12:16.885 Volatile Memory Backup: OK 00:12:16.885 Current Temperature: 0 Kelvin (-2[2024-06-10 12:16:22.278372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:16.885 [2024-06-10 12:16:22.278381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:12:16.885 [2024-06-10 12:16:22.278406] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:16.885 [2024-06-10 12:16:22.278415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.885 [2024-06-10 12:16:22.278421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.885 [2024-06-10 12:16:22.278429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.885 [2024-06-10 12:16:22.278435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.885 [2024-06-10 12:16:22.278531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:16.885 [2024-06-10 12:16:22.278541] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:16.886 [2024-06-10 12:16:22.279529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.886 [2024-06-10 12:16:22.279569] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:16.886 [2024-06-10 12:16:22.279575] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:16.886 [2024-06-10 12:16:22.280540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:16.886 [2024-06-10 12:16:22.280551] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:16.886 [2024-06-10 12:16:22.280615] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:16.886 [2024-06-10 12:16:22.286201] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:16.886 73 Celsius) 00:12:16.886 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:16.886 Available Spare: 0% 00:12:16.886 Available Spare Threshold: 0% 00:12:16.886 Life Percentage Used: 0% 00:12:16.886 Data Units Read: 0 00:12:16.886 Data Units Written: 0 00:12:16.886 Host Read Commands: 0 00:12:16.886 Host Write Commands: 0 00:12:16.886 Controller Busy Time: 0 minutes 00:12:16.886 Power Cycles: 0 00:12:16.886 Power On Hours: 0 hours 00:12:16.886 Unsafe Shutdowns: 0 00:12:16.886 Unrecoverable Media Errors: 0 00:12:16.886 Lifetime Error Log Entries: 0 00:12:16.886 Warning Temperature Time: 0 minutes 00:12:16.886 Critical Temperature Time: 0 minutes 00:12:16.886 00:12:16.886 Number of Queues 00:12:16.886 ================ 00:12:16.886 Number of I/O Submission Queues: 127 00:12:16.886 Number of I/O Completion Queues: 127 00:12:16.886 00:12:16.886 Active Namespaces 00:12:16.886 ================= 00:12:16.886 Namespace ID:1 00:12:16.886 Error Recovery Timeout: Unlimited 00:12:16.886 Command Set Identifier: NVM (00h) 00:12:16.886 Deallocate: Supported 00:12:16.886 Deallocated/Unwritten Error: Not Supported 00:12:16.886 Deallocated Read Value: Unknown 00:12:16.886 Deallocate 
in Write Zeroes: Not Supported 00:12:16.886 Deallocated Guard Field: 0xFFFF 00:12:16.886 Flush: Supported 00:12:16.886 Reservation: Supported 00:12:16.886 Namespace Sharing Capabilities: Multiple Controllers 00:12:16.886 Size (in LBAs): 131072 (0GiB) 00:12:16.886 Capacity (in LBAs): 131072 (0GiB) 00:12:16.886 Utilization (in LBAs): 131072 (0GiB) 00:12:16.886 NGUID: 2BC099E587304406B3A37B46142AA374 00:12:16.886 UUID: 2bc099e5-8730-4406-b3a3-7b46142aa374 00:12:16.886 Thin Provisioning: Not Supported 00:12:16.886 Per-NS Atomic Units: Yes 00:12:16.886 Atomic Boundary Size (Normal): 0 00:12:16.886 Atomic Boundary Size (PFail): 0 00:12:16.886 Atomic Boundary Offset: 0 00:12:16.886 Maximum Single Source Range Length: 65535 00:12:16.886 Maximum Copy Length: 65535 00:12:16.886 Maximum Source Range Count: 1 00:12:16.886 NGUID/EUI64 Never Reused: No 00:12:16.886 Namespace Write Protected: No 00:12:16.886 Number of LBA Formats: 1 00:12:16.886 Current LBA Format: LBA Format #00 00:12:16.886 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.886 00:12:16.886 12:16:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:16.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.886 [2024-06-10 12:16:22.469840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.169 Initializing NVMe Controllers 00:12:22.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:22.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:22.169 Initialization complete. Launching workers. 00:12:22.169 ======================================================== 00:12:22.169 Latency(us) 00:12:22.169 Device Information : IOPS MiB/s Average min max 00:12:22.169 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39974.98 156.15 3202.20 832.66 6849.19 00:12:22.169 ======================================================== 00:12:22.169 Total : 39974.98 156.15 3202.20 832.66 6849.19 00:12:22.169 00:12:22.169 [2024-06-10 12:16:27.490452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.169 12:16:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:22.169 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.169 [2024-06-10 12:16:27.672323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.459 Initializing NVMe Controllers 00:12:27.459 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:27.459 Initialization complete. Launching workers. 
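Both spdk_nvme_perf invocations in this section follow the same pattern: the -r string names the vfio-user socket directory and subsystem NQN in place of a TCP address and port, and only the workload flag differs between the read and write runs. A sketch of that pattern, under the same $SPDK assumption as above:

  tgt='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  perf="$SPDK/build/bin/spdk_nvme_perf"
  # 5 s of 4 KiB I/O at queue depth 128, pinned to core 1 (-c 0x2)
  $perf -r "$tgt" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  $perf -r "$tgt" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2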
00:12:27.459 ======================================================== 00:12:27.459 Latency(us) 00:12:27.459 Device Information : IOPS MiB/s Average min max 00:12:27.459 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.74 62.71 7979.26 6161.93 8803.53 00:12:27.459 ======================================================== 00:12:27.459 Total : 16052.74 62.71 7979.26 6161.93 8803.53 00:12:27.459 00:12:27.459 [2024-06-10 12:16:32.714086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.459 12:16:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:27.459 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.459 [2024-06-10 12:16:32.912985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.743 [2024-06-10 12:16:37.985395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.743 Initializing NVMe Controllers 00:12:32.743 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.743 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:32.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:32.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:32.743 Initialization complete. Launching workers. 00:12:32.743 Starting thread on core 2 00:12:32.743 Starting thread on core 3 00:12:32.743 Starting thread on core 1 00:12:32.743 12:16:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:32.743 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.743 [2024-06-10 12:16:38.248111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.113 [2024-06-10 12:16:41.419341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.113 Initializing NVMe Controllers 00:12:36.113 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.113 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.113 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:36.113 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:36.113 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:36.113 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:36.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:36.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:36.113 Initialization complete. Launching workers. 
00:12:36.113 Starting thread on core 1 with urgent priority queue 00:12:36.113 Starting thread on core 2 with urgent priority queue 00:12:36.113 Starting thread on core 3 with urgent priority queue 00:12:36.113 Starting thread on core 0 with urgent priority queue 00:12:36.113 SPDK bdev Controller (SPDK1 ) core 0: 7295.33 IO/s 13.71 secs/100000 ios 00:12:36.113 SPDK bdev Controller (SPDK1 ) core 1: 6949.67 IO/s 14.39 secs/100000 ios 00:12:36.113 SPDK bdev Controller (SPDK1 ) core 2: 5691.33 IO/s 17.57 secs/100000 ios 00:12:36.113 SPDK bdev Controller (SPDK1 ) core 3: 7292.33 IO/s 13.71 secs/100000 ios 00:12:36.113 ======================================================== 00:12:36.113 00:12:36.113 12:16:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:36.113 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.113 [2024-06-10 12:16:41.690658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.373 Initializing NVMe Controllers 00:12:36.373 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.373 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.373 Namespace ID: 1 size: 0GB 00:12:36.373 Initialization complete. 00:12:36.373 INFO: using host memory buffer for IO 00:12:36.373 Hello world! 00:12:36.373 [2024-06-10 12:16:41.724898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.373 12:16:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:36.373 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.634 [2024-06-10 12:16:41.987269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.576 Initializing NVMe Controllers 00:12:37.576 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.576 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.576 Initialization complete. Launching workers. 
00:12:37.576 submit (in ns) avg, min, max = 8973.2, 3945.0, 6993472.5 00:12:37.576 complete (in ns) avg, min, max = 16842.6, 2377.5, 5992880.8 00:12:37.576 00:12:37.576 Submit histogram 00:12:37.576 ================ 00:12:37.576 Range in us Cumulative Count 00:12:37.576 3.920 - 3.947: 0.0463% ( 9) 00:12:37.576 3.947 - 3.973: 4.6592% ( 896) 00:12:37.576 3.973 - 4.000: 12.7780% ( 1577) 00:12:37.576 4.000 - 4.027: 23.6048% ( 2103) 00:12:37.576 4.027 - 4.053: 34.6890% ( 2153) 00:12:37.576 4.053 - 4.080: 48.6100% ( 2704) 00:12:37.576 4.080 - 4.107: 64.0393% ( 2997) 00:12:37.576 4.107 - 4.133: 79.5305% ( 3009) 00:12:37.576 4.133 - 4.160: 90.5787% ( 2146) 00:12:37.576 4.160 - 4.187: 95.7887% ( 1012) 00:12:37.576 4.187 - 4.213: 98.0745% ( 444) 00:12:37.576 4.213 - 4.240: 99.0321% ( 186) 00:12:37.576 4.240 - 4.267: 99.3513% ( 62) 00:12:37.576 4.267 - 4.293: 99.4491% ( 19) 00:12:37.576 4.293 - 4.320: 99.4697% ( 4) 00:12:37.576 4.320 - 4.347: 99.4749% ( 1) 00:12:37.576 4.400 - 4.427: 99.4800% ( 1) 00:12:37.576 4.453 - 4.480: 99.4852% ( 1) 00:12:37.576 4.507 - 4.533: 99.4903% ( 1) 00:12:37.576 4.587 - 4.613: 99.4955% ( 1) 00:12:37.576 4.613 - 4.640: 99.5006% ( 1) 00:12:37.576 4.747 - 4.773: 99.5058% ( 1) 00:12:37.576 4.800 - 4.827: 99.5109% ( 1) 00:12:37.576 4.960 - 4.987: 99.5161% ( 1) 00:12:37.576 5.120 - 5.147: 99.5264% ( 2) 00:12:37.576 5.333 - 5.360: 99.5315% ( 1) 00:12:37.576 5.627 - 5.653: 99.5367% ( 1) 00:12:37.576 5.733 - 5.760: 99.5418% ( 1) 00:12:37.576 5.893 - 5.920: 99.5470% ( 1) 00:12:37.576 6.000 - 6.027: 99.5572% ( 2) 00:12:37.576 6.027 - 6.053: 99.5624% ( 1) 00:12:37.576 6.053 - 6.080: 99.5675% ( 1) 00:12:37.576 6.080 - 6.107: 99.5830% ( 3) 00:12:37.576 6.133 - 6.160: 99.5933% ( 2) 00:12:37.576 6.160 - 6.187: 99.6087% ( 3) 00:12:37.576 6.240 - 6.267: 99.6139% ( 1) 00:12:37.576 6.293 - 6.320: 99.6293% ( 3) 00:12:37.576 6.347 - 6.373: 99.6448% ( 3) 00:12:37.576 6.400 - 6.427: 99.6499% ( 1) 00:12:37.576 6.427 - 6.453: 99.6602% ( 2) 00:12:37.576 6.453 - 6.480: 99.6654% ( 1) 00:12:37.576 6.480 - 6.507: 99.6757% ( 2) 00:12:37.576 6.507 - 6.533: 99.6808% ( 1) 00:12:37.576 6.533 - 6.560: 99.6860% ( 1) 00:12:37.576 6.560 - 6.587: 99.6911% ( 1) 00:12:37.576 6.613 - 6.640: 99.6963% ( 1) 00:12:37.576 6.640 - 6.667: 99.7014% ( 1) 00:12:37.576 6.693 - 6.720: 99.7065% ( 1) 00:12:37.576 6.720 - 6.747: 99.7117% ( 1) 00:12:37.576 6.747 - 6.773: 99.7168% ( 1) 00:12:37.576 6.773 - 6.800: 99.7220% ( 1) 00:12:37.576 6.800 - 6.827: 99.7271% ( 1) 00:12:37.576 6.827 - 6.880: 99.7426% ( 3) 00:12:37.576 6.880 - 6.933: 99.7529% ( 2) 00:12:37.576 7.040 - 7.093: 99.7580% ( 1) 00:12:37.576 7.093 - 7.147: 99.7632% ( 1) 00:12:37.576 7.253 - 7.307: 99.7786% ( 3) 00:12:37.576 7.307 - 7.360: 99.7838% ( 1) 00:12:37.576 7.360 - 7.413: 99.7941% ( 2) 00:12:37.576 7.413 - 7.467: 99.7992% ( 1) 00:12:37.576 7.467 - 7.520: 99.8044% ( 1) 00:12:37.576 7.573 - 7.627: 99.8095% ( 1) 00:12:37.576 7.680 - 7.733: 99.8147% ( 1) 00:12:37.576 7.787 - 7.840: 99.8250% ( 2) 00:12:37.576 7.840 - 7.893: 99.8353% ( 2) 00:12:37.576 7.893 - 7.947: 99.8404% ( 1) 00:12:37.576 8.053 - 8.107: 99.8456% ( 1) 00:12:37.576 8.107 - 8.160: 99.8507% ( 1) 00:12:37.576 8.160 - 8.213: 99.8558% ( 1) 00:12:37.576 8.267 - 8.320: 99.8610% ( 1) 00:12:37.576 8.373 - 8.427: 99.8661% ( 1) 00:12:37.576 8.587 - 8.640: 99.8713% ( 1) 00:12:37.576 9.333 - 9.387: 99.8764% ( 1) 00:12:37.576 12.320 - 12.373: 99.8816% ( 1) 00:12:37.576 3986.773 - 4014.080: 99.9949% ( 22) 00:12:37.576 6990.507 - 7045.120: 100.0000% ( 1) 00:12:37.576 00:12:37.576 Complete histogram 00:12:37.576 
================== 00:12:37.576 Range in us Cumulative Count 00:12:37.576 2.373 - 2.387: 0.0051% ( 1) 00:12:37.576 2.387 - 2.400: 0.0360% ( 6) 00:12:37.576 [2024-06-10 12:16:43.011884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.576 2.400 - 2.413: 1.4261% ( 270) 00:12:37.576 2.413 - 2.427: 1.5908% ( 32) 00:12:37.576 2.427 - 2.440: 1.8843% ( 57) 00:12:37.576 2.440 - 2.453: 1.9512% ( 13) 00:12:37.576 2.453 - 2.467: 22.4825% ( 3988) 00:12:37.576 2.467 - 2.480: 57.6400% ( 6829) 00:12:37.576 2.480 - 2.493: 65.3933% ( 1506) 00:12:37.576 2.493 - 2.507: 74.5521% ( 1779) 00:12:37.576 2.507 - 2.520: 79.6695% ( 994) 00:12:37.576 2.520 - 2.533: 81.7082% ( 396) 00:12:37.576 2.533 - 2.547: 87.3764% ( 1101) 00:12:37.576 2.547 - 2.560: 93.4205% ( 1174) 00:12:37.576 2.560 - 2.573: 96.4374% ( 586) 00:12:37.576 2.573 - 2.587: 98.0540% ( 314) 00:12:37.576 2.587 - 2.600: 98.9806% ( 180) 00:12:37.576 2.600 - 2.613: 99.3565% ( 73) 00:12:37.576 2.613 - 2.627: 99.4182% ( 12) 00:12:37.576 2.627 - 2.640: 99.4440% ( 5) 00:12:37.576 2.640 - 2.653: 99.4543% ( 2) 00:12:37.576 2.653 - 2.667: 99.4646% ( 2) 00:12:37.576 2.840 - 2.853: 99.4749% ( 2) 00:12:37.576 2.867 - 2.880: 99.4800% ( 1) 00:12:37.576 4.240 - 4.267: 99.4852% ( 1) 00:12:37.576 4.373 - 4.400: 99.4903% ( 1) 00:12:37.576 4.587 - 4.613: 99.4955% ( 1) 00:12:37.576 4.693 - 4.720: 99.5058% ( 2) 00:12:37.576 4.773 - 4.800: 99.5109% ( 1) 00:12:37.576 4.827 - 4.853: 99.5161% ( 1) 00:12:37.576 4.907 - 4.933: 99.5212% ( 1) 00:12:37.576 4.960 - 4.987: 99.5264% ( 1) 00:12:37.576 5.040 - 5.067: 99.5315% ( 1) 00:12:37.576 5.093 - 5.120: 99.5367% ( 1) 00:12:37.576 5.173 - 5.200: 99.5470% ( 2) 00:12:37.576 5.200 - 5.227: 99.5521% ( 1) 00:12:37.576 5.280 - 5.307: 99.5572% ( 1) 00:12:37.576 5.440 - 5.467: 99.5624% ( 1) 00:12:37.576 5.493 - 5.520: 99.5675% ( 1) 00:12:37.576 5.573 - 5.600: 99.5727% ( 1) 00:12:37.576 5.600 - 5.627: 99.5778% ( 1) 00:12:37.576 5.627 - 5.653: 99.5830% ( 1) 00:12:37.577 5.760 - 5.787: 99.5881% ( 1) 00:12:37.577 5.840 - 5.867: 99.5933% ( 1) 00:12:37.577 5.947 - 5.973: 99.6036% ( 2) 00:12:37.577 6.080 - 6.107: 99.6087% ( 1) 00:12:37.577 6.240 - 6.267: 99.6139% ( 1) 00:12:37.577 6.267 - 6.293: 99.6190% ( 1) 00:12:37.577 6.667 - 6.693: 99.6242% ( 1) 00:12:37.577 11.307 - 11.360: 99.6293% ( 1) 00:12:37.577 13.440 - 13.493: 99.6345% ( 1) 00:12:37.577 43.947 - 44.160: 99.6396% ( 1) 00:12:37.577 1017.173 - 1024.000: 99.6448% ( 1) 00:12:37.577 3986.773 - 4014.080: 99.9949% ( 68) 00:12:37.577 5980.160 - 6007.467: 100.0000% ( 1) 00:12:37.577 00:12:37.577 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:37.577 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:37.577 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:37.577 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:37.577 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:37.838 [ 00:12:37.838 { 00:12:37.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:37.838 "subtype": "Discovery", 00:12:37.838 "listen_addresses": [], 00:12:37.838 "allow_any_host": true, 00:12:37.838 "hosts": [] 00:12:37.838 }, 00:12:37.838 { 00:12:37.838 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:12:37.838 "subtype": "NVMe", 00:12:37.838 "listen_addresses": [ 00:12:37.838 { 00:12:37.838 "trtype": "VFIOUSER", 00:12:37.838 "adrfam": "IPv4", 00:12:37.838 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:37.838 "trsvcid": "0" 00:12:37.838 } 00:12:37.838 ], 00:12:37.838 "allow_any_host": true, 00:12:37.838 "hosts": [], 00:12:37.838 "serial_number": "SPDK1", 00:12:37.838 "model_number": "SPDK bdev Controller", 00:12:37.838 "max_namespaces": 32, 00:12:37.838 "min_cntlid": 1, 00:12:37.838 "max_cntlid": 65519, 00:12:37.838 "namespaces": [ 00:12:37.838 { 00:12:37.838 "nsid": 1, 00:12:37.838 "bdev_name": "Malloc1", 00:12:37.838 "name": "Malloc1", 00:12:37.838 "nguid": "2BC099E587304406B3A37B46142AA374", 00:12:37.838 "uuid": "2bc099e5-8730-4406-b3a3-7b46142aa374" 00:12:37.838 } 00:12:37.838 ] 00:12:37.838 }, 00:12:37.838 { 00:12:37.838 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:37.838 "subtype": "NVMe", 00:12:37.838 "listen_addresses": [ 00:12:37.838 { 00:12:37.838 "trtype": "VFIOUSER", 00:12:37.838 "adrfam": "IPv4", 00:12:37.838 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:37.838 "trsvcid": "0" 00:12:37.838 } 00:12:37.838 ], 00:12:37.838 "allow_any_host": true, 00:12:37.838 "hosts": [], 00:12:37.838 "serial_number": "SPDK2", 00:12:37.838 "model_number": "SPDK bdev Controller", 00:12:37.838 "max_namespaces": 32, 00:12:37.838 "min_cntlid": 1, 00:12:37.838 "max_cntlid": 65519, 00:12:37.838 "namespaces": [ 00:12:37.838 { 00:12:37.838 "nsid": 1, 00:12:37.838 "bdev_name": "Malloc2", 00:12:37.838 "name": "Malloc2", 00:12:37.838 "nguid": "4832B3AA3DB94FAE9C61A81B7221D514", 00:12:37.838 "uuid": "4832b3aa-3db9-4fae-9c61-a81b7221d514" 00:12:37.838 } 00:12:37.838 ] 00:12:37.838 } 00:12:37.838 ] 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=555995 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:37.838 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.838 Malloc3 00:12:37.838 [2024-06-10 12:16:43.396629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.838 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:38.099 [2024-06-10 12:16:43.565746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.099 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:38.099 Asynchronous Event Request test 00:12:38.099 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.099 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.099 Registering asynchronous event callbacks... 00:12:38.099 Starting namespace attribute notice tests for all controllers... 00:12:38.099 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:38.099 aer_cb - Changed Namespace 00:12:38.099 Cleaning up... 00:12:38.362 [ 00:12:38.362 { 00:12:38.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:38.362 "subtype": "Discovery", 00:12:38.362 "listen_addresses": [], 00:12:38.362 "allow_any_host": true, 00:12:38.362 "hosts": [] 00:12:38.362 }, 00:12:38.362 { 00:12:38.362 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:38.362 "subtype": "NVMe", 00:12:38.362 "listen_addresses": [ 00:12:38.362 { 00:12:38.362 "trtype": "VFIOUSER", 00:12:38.362 "adrfam": "IPv4", 00:12:38.362 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:38.362 "trsvcid": "0" 00:12:38.362 } 00:12:38.362 ], 00:12:38.362 "allow_any_host": true, 00:12:38.362 "hosts": [], 00:12:38.362 "serial_number": "SPDK1", 00:12:38.362 "model_number": "SPDK bdev Controller", 00:12:38.362 "max_namespaces": 32, 00:12:38.362 "min_cntlid": 1, 00:12:38.362 "max_cntlid": 65519, 00:12:38.362 "namespaces": [ 00:12:38.362 { 00:12:38.362 "nsid": 1, 00:12:38.362 "bdev_name": "Malloc1", 00:12:38.362 "name": "Malloc1", 00:12:38.362 "nguid": "2BC099E587304406B3A37B46142AA374", 00:12:38.362 "uuid": "2bc099e5-8730-4406-b3a3-7b46142aa374" 00:12:38.362 }, 00:12:38.362 { 00:12:38.362 "nsid": 2, 00:12:38.362 "bdev_name": "Malloc3", 00:12:38.362 "name": "Malloc3", 00:12:38.362 "nguid": "58C45BF0C8DE45CBA827129E65804CF7", 00:12:38.362 "uuid": "58c45bf0-c8de-45cb-a827-129e65804cf7" 00:12:38.362 } 00:12:38.362 ] 00:12:38.362 }, 00:12:38.362 { 00:12:38.362 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:38.362 "subtype": "NVMe", 00:12:38.362 "listen_addresses": [ 00:12:38.362 { 00:12:38.362 "trtype": "VFIOUSER", 00:12:38.362 "adrfam": "IPv4", 00:12:38.363 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:38.363 "trsvcid": "0" 00:12:38.363 } 00:12:38.363 ], 00:12:38.363 "allow_any_host": true, 00:12:38.363 "hosts": [], 00:12:38.363 "serial_number": "SPDK2", 00:12:38.363 "model_number": "SPDK bdev Controller", 00:12:38.363 
"max_namespaces": 32, 00:12:38.363 "min_cntlid": 1, 00:12:38.363 "max_cntlid": 65519, 00:12:38.363 "namespaces": [ 00:12:38.363 { 00:12:38.363 "nsid": 1, 00:12:38.363 "bdev_name": "Malloc2", 00:12:38.363 "name": "Malloc2", 00:12:38.363 "nguid": "4832B3AA3DB94FAE9C61A81B7221D514", 00:12:38.363 "uuid": "4832b3aa-3db9-4fae-9c61-a81b7221d514" 00:12:38.363 } 00:12:38.363 ] 00:12:38.363 } 00:12:38.363 ] 00:12:38.363 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 555995 00:12:38.363 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:38.363 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:38.363 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:38.363 12:16:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:38.363 [2024-06-10 12:16:43.789433] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:38.363 [2024-06-10 12:16:43.789474] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556168 ] 00:12:38.363 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.363 [2024-06-10 12:16:43.821783] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:38.363 [2024-06-10 12:16:43.830400] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:38.363 [2024-06-10 12:16:43.830422] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbbc1ddd000 00:12:38.363 [2024-06-10 12:16:43.831405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.832409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.833413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.834426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.835430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.836435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.837440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.838450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.363 [2024-06-10 12:16:43.839456] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:38.363 [2024-06-10 12:16:43.839469] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbbc1dd2000 00:12:38.363 [2024-06-10 12:16:43.840794] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:38.363 [2024-06-10 12:16:43.857003] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:38.363 [2024-06-10 12:16:43.857026] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:38.363 [2024-06-10 12:16:43.862117] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:38.363 [2024-06-10 12:16:43.862166] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:38.363 [2024-06-10 12:16:43.862251] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:38.363 [2024-06-10 12:16:43.862267] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:38.363 [2024-06-10 12:16:43.862273] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:38.363 [2024-06-10 12:16:43.863128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:38.363 [2024-06-10 12:16:43.863140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:38.363 [2024-06-10 12:16:43.863147] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:38.363 [2024-06-10 12:16:43.864135] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:38.363 [2024-06-10 12:16:43.864145] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:38.363 [2024-06-10 12:16:43.864152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.865141] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:38.363 [2024-06-10 12:16:43.865150] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.866148] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:38.363 [2024-06-10 12:16:43.866157] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:38.363 [2024-06-10 12:16:43.866162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.866168] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.866274] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:38.363 [2024-06-10 12:16:43.866279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.866284] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:38.363 [2024-06-10 12:16:43.867151] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:38.363 [2024-06-10 12:16:43.868157] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:38.363 [2024-06-10 12:16:43.869166] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:38.363 [2024-06-10 12:16:43.870167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.363 [2024-06-10 12:16:43.870212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:38.363 [2024-06-10 12:16:43.871178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:38.363 [2024-06-10 12:16:43.871186] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:38.363 [2024-06-10 12:16:43.871191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:38.363 [2024-06-10 12:16:43.871216] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:38.363 [2024-06-10 12:16:43.871223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:38.363 [2024-06-10 12:16:43.871239] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.363 [2024-06-10 12:16:43.871245] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.363 [2024-06-10 12:16:43.871257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.363 [2024-06-10 12:16:43.879203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:38.363 [2024-06-10 12:16:43.879215] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:38.363 [2024-06-10 12:16:43.879221] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:38.363 [2024-06-10 12:16:43.879225] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:38.363 [2024-06-10 12:16:43.879232] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:38.363 [2024-06-10 12:16:43.879237] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:38.363 [2024-06-10 12:16:43.879242] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:38.363 [2024-06-10 12:16:43.879246] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:38.363 [2024-06-10 12:16:43.879254] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:38.363 [2024-06-10 12:16:43.879265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.887201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.887213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.364 [2024-06-10 12:16:43.887222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.364 [2024-06-10 12:16:43.887230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.364 [2024-06-10 12:16:43.887238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.364 [2024-06-10 12:16:43.887243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.887251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.887260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.895199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.895208] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:38.364 [2024-06-10 12:16:43.895213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.895219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.895227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.895235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.903202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.903257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.903265] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.903272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:38.364 [2024-06-10 12:16:43.903277] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:38.364 [2024-06-10 12:16:43.903283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.911200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.911212] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:38.364 [2024-06-10 12:16:43.911224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.911231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.911238] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.364 [2024-06-10 12:16:43.911242] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.364 [2024-06-10 12:16:43.911248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.919200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.919214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.919222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.919229] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.364 [2024-06-10 12:16:43.919233] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.364 [2024-06-10 12:16:43.919239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.927202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.927212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927245] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:38.364 [2024-06-10 12:16:43.927250] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:38.364 [2024-06-10 12:16:43.927255] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:38.364 [2024-06-10 12:16:43.927273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.935202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.935216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.943201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.943214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.951202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.951215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.959200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:38.364 [2024-06-10 12:16:43.959214] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:38.364 [2024-06-10 12:16:43.959219] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:38.364 [2024-06-10 12:16:43.959222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:38.364 [2024-06-10 12:16:43.959225] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:38.364 [2024-06-10 12:16:43.959231] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:38.364 [2024-06-10 12:16:43.959239] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:38.364 [2024-06-10 12:16:43.959244] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:38.364 [2024-06-10 12:16:43.959250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.959257] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:38.364 [2024-06-10 12:16:43.959261] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.364 [2024-06-10 12:16:43.959267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.364 [2024-06-10 12:16:43.959275] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:38.364 [2024-06-10 12:16:43.959279] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:38.364 [2024-06-10 12:16:43.959285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:38.626 [2024-06-10 12:16:43.967202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:38.626 [2024-06-10 12:16:43.967221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:38.626 [2024-06-10 12:16:43.967230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:38.626 [2024-06-10 12:16:43.967238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:38.626 ===================================================== 00:12:38.626 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:38.626 ===================================================== 00:12:38.626 Controller Capabilities/Features 00:12:38.626 ================================ 00:12:38.626 Vendor ID: 4e58 00:12:38.626 Subsystem Vendor ID: 4e58 00:12:38.626 Serial Number: SPDK2 00:12:38.626 Model Number: SPDK bdev Controller 00:12:38.626 Firmware Version: 24.09 00:12:38.626 Recommended Arb Burst: 6 00:12:38.626 IEEE OUI Identifier: 8d 6b 50 00:12:38.626 Multi-path I/O 00:12:38.626 May have multiple subsystem ports: Yes 00:12:38.626 May have multiple controllers: Yes 00:12:38.626 Associated with SR-IOV VF: No 00:12:38.626 Max Data Transfer Size: 131072 00:12:38.626 Max Number of Namespaces: 32 00:12:38.626 Max Number of I/O Queues: 127 00:12:38.626 NVMe Specification Version (VS): 1.3 00:12:38.626 NVMe Specification Version (Identify): 1.3 00:12:38.626 Maximum Queue Entries: 256 00:12:38.626 Contiguous Queues Required: Yes 00:12:38.626 Arbitration Mechanisms Supported 00:12:38.626 Weighted Round Robin: Not Supported 00:12:38.626 Vendor Specific: Not Supported 00:12:38.626 Reset Timeout: 15000 ms 00:12:38.626 Doorbell Stride: 4 bytes 
00:12:38.626 NVM Subsystem Reset: Not Supported 00:12:38.626 Command Sets Supported 00:12:38.626 NVM Command Set: Supported 00:12:38.626 Boot Partition: Not Supported 00:12:38.626 Memory Page Size Minimum: 4096 bytes 00:12:38.626 Memory Page Size Maximum: 4096 bytes 00:12:38.626 Persistent Memory Region: Not Supported 00:12:38.626 Optional Asynchronous Events Supported 00:12:38.626 Namespace Attribute Notices: Supported 00:12:38.626 Firmware Activation Notices: Not Supported 00:12:38.626 ANA Change Notices: Not Supported 00:12:38.626 PLE Aggregate Log Change Notices: Not Supported 00:12:38.626 LBA Status Info Alert Notices: Not Supported 00:12:38.626 EGE Aggregate Log Change Notices: Not Supported 00:12:38.626 Normal NVM Subsystem Shutdown event: Not Supported 00:12:38.626 Zone Descriptor Change Notices: Not Supported 00:12:38.626 Discovery Log Change Notices: Not Supported 00:12:38.626 Controller Attributes 00:12:38.626 128-bit Host Identifier: Supported 00:12:38.626 Non-Operational Permissive Mode: Not Supported 00:12:38.626 NVM Sets: Not Supported 00:12:38.626 Read Recovery Levels: Not Supported 00:12:38.626 Endurance Groups: Not Supported 00:12:38.626 Predictable Latency Mode: Not Supported 00:12:38.626 Traffic Based Keep ALive: Not Supported 00:12:38.626 Namespace Granularity: Not Supported 00:12:38.626 SQ Associations: Not Supported 00:12:38.626 UUID List: Not Supported 00:12:38.627 Multi-Domain Subsystem: Not Supported 00:12:38.627 Fixed Capacity Management: Not Supported 00:12:38.627 Variable Capacity Management: Not Supported 00:12:38.627 Delete Endurance Group: Not Supported 00:12:38.627 Delete NVM Set: Not Supported 00:12:38.627 Extended LBA Formats Supported: Not Supported 00:12:38.627 Flexible Data Placement Supported: Not Supported 00:12:38.627 00:12:38.627 Controller Memory Buffer Support 00:12:38.627 ================================ 00:12:38.627 Supported: No 00:12:38.627 00:12:38.627 Persistent Memory Region Support 00:12:38.627 ================================ 00:12:38.627 Supported: No 00:12:38.627 00:12:38.627 Admin Command Set Attributes 00:12:38.627 ============================ 00:12:38.627 Security Send/Receive: Not Supported 00:12:38.627 Format NVM: Not Supported 00:12:38.627 Firmware Activate/Download: Not Supported 00:12:38.627 Namespace Management: Not Supported 00:12:38.627 Device Self-Test: Not Supported 00:12:38.627 Directives: Not Supported 00:12:38.627 NVMe-MI: Not Supported 00:12:38.627 Virtualization Management: Not Supported 00:12:38.627 Doorbell Buffer Config: Not Supported 00:12:38.627 Get LBA Status Capability: Not Supported 00:12:38.627 Command & Feature Lockdown Capability: Not Supported 00:12:38.627 Abort Command Limit: 4 00:12:38.627 Async Event Request Limit: 4 00:12:38.627 Number of Firmware Slots: N/A 00:12:38.627 Firmware Slot 1 Read-Only: N/A 00:12:38.627 Firmware Activation Without Reset: N/A 00:12:38.627 Multiple Update Detection Support: N/A 00:12:38.627 Firmware Update Granularity: No Information Provided 00:12:38.627 Per-Namespace SMART Log: No 00:12:38.627 Asymmetric Namespace Access Log Page: Not Supported 00:12:38.627 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:38.627 Command Effects Log Page: Supported 00:12:38.627 Get Log Page Extended Data: Supported 00:12:38.627 Telemetry Log Pages: Not Supported 00:12:38.627 Persistent Event Log Pages: Not Supported 00:12:38.627 Supported Log Pages Log Page: May Support 00:12:38.627 Commands Supported & Effects Log Page: Not Supported 00:12:38.627 Feature Identifiers & Effects Log Page:May 
Support 00:12:38.627 NVMe-MI Commands & Effects Log Page: May Support 00:12:38.627 Data Area 4 for Telemetry Log: Not Supported 00:12:38.627 Error Log Page Entries Supported: 128 00:12:38.627 Keep Alive: Supported 00:12:38.627 Keep Alive Granularity: 10000 ms 00:12:38.627 00:12:38.627 NVM Command Set Attributes 00:12:38.627 ========================== 00:12:38.627 Submission Queue Entry Size 00:12:38.627 Max: 64 00:12:38.627 Min: 64 00:12:38.627 Completion Queue Entry Size 00:12:38.627 Max: 16 00:12:38.627 Min: 16 00:12:38.627 Number of Namespaces: 32 00:12:38.627 Compare Command: Supported 00:12:38.627 Write Uncorrectable Command: Not Supported 00:12:38.627 Dataset Management Command: Supported 00:12:38.627 Write Zeroes Command: Supported 00:12:38.627 Set Features Save Field: Not Supported 00:12:38.627 Reservations: Not Supported 00:12:38.627 Timestamp: Not Supported 00:12:38.627 Copy: Supported 00:12:38.627 Volatile Write Cache: Present 00:12:38.627 Atomic Write Unit (Normal): 1 00:12:38.627 Atomic Write Unit (PFail): 1 00:12:38.627 Atomic Compare & Write Unit: 1 00:12:38.627 Fused Compare & Write: Supported 00:12:38.627 Scatter-Gather List 00:12:38.627 SGL Command Set: Supported (Dword aligned) 00:12:38.627 SGL Keyed: Not Supported 00:12:38.627 SGL Bit Bucket Descriptor: Not Supported 00:12:38.627 SGL Metadata Pointer: Not Supported 00:12:38.627 Oversized SGL: Not Supported 00:12:38.627 SGL Metadata Address: Not Supported 00:12:38.627 SGL Offset: Not Supported 00:12:38.627 Transport SGL Data Block: Not Supported 00:12:38.627 Replay Protected Memory Block: Not Supported 00:12:38.627 00:12:38.627 Firmware Slot Information 00:12:38.627 ========================= 00:12:38.627 Active slot: 1 00:12:38.627 Slot 1 Firmware Revision: 24.09 00:12:38.627 00:12:38.627 00:12:38.627 Commands Supported and Effects 00:12:38.627 ============================== 00:12:38.627 Admin Commands 00:12:38.627 -------------- 00:12:38.627 Get Log Page (02h): Supported 00:12:38.627 Identify (06h): Supported 00:12:38.627 Abort (08h): Supported 00:12:38.627 Set Features (09h): Supported 00:12:38.627 Get Features (0Ah): Supported 00:12:38.627 Asynchronous Event Request (0Ch): Supported 00:12:38.627 Keep Alive (18h): Supported 00:12:38.627 I/O Commands 00:12:38.627 ------------ 00:12:38.627 Flush (00h): Supported LBA-Change 00:12:38.627 Write (01h): Supported LBA-Change 00:12:38.627 Read (02h): Supported 00:12:38.627 Compare (05h): Supported 00:12:38.627 Write Zeroes (08h): Supported LBA-Change 00:12:38.627 Dataset Management (09h): Supported LBA-Change 00:12:38.627 Copy (19h): Supported LBA-Change 00:12:38.627 Unknown (79h): Supported LBA-Change 00:12:38.627 Unknown (7Ah): Supported 00:12:38.627 00:12:38.627 Error Log 00:12:38.627 ========= 00:12:38.627 00:12:38.627 Arbitration 00:12:38.627 =========== 00:12:38.627 Arbitration Burst: 1 00:12:38.627 00:12:38.627 Power Management 00:12:38.627 ================ 00:12:38.627 Number of Power States: 1 00:12:38.627 Current Power State: Power State #0 00:12:38.627 Power State #0: 00:12:38.627 Max Power: 0.00 W 00:12:38.627 Non-Operational State: Operational 00:12:38.627 Entry Latency: Not Reported 00:12:38.627 Exit Latency: Not Reported 00:12:38.627 Relative Read Throughput: 0 00:12:38.627 Relative Read Latency: 0 00:12:38.627 Relative Write Throughput: 0 00:12:38.627 Relative Write Latency: 0 00:12:38.627 Idle Power: Not Reported 00:12:38.627 Active Power: Not Reported 00:12:38.627 Non-Operational Permissive Mode: Not Supported 00:12:38.627 00:12:38.627 Health Information 
00:12:38.627 ================== 00:12:38.627 Critical Warnings: 00:12:38.627 Available Spare Space: OK 00:12:38.627 Temperature: OK 00:12:38.627 Device Reliability: OK 00:12:38.627 Read Only: No 00:12:38.627 Volatile Memory Backup: OK 00:12:38.627 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:38.627 [2024-06-10 12:16:43.967340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:38.627 [2024-06-10 12:16:43.975200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:38.627 [2024-06-10 12:16:43.975227] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:38.627 [2024-06-10 12:16:43.975236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.627 [2024-06-10 12:16:43.975243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.627 [2024-06-10 12:16:43.975249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.627 [2024-06-10 12:16:43.975255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.627 [2024-06-10 12:16:43.975306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:38.627 [2024-06-10 12:16:43.975317] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:38.627 [2024-06-10 12:16:43.976309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.627 [2024-06-10 12:16:43.976357] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:38.627 [2024-06-10 12:16:43.976364] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:38.627 [2024-06-10 12:16:43.977320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:38.627 [2024-06-10 12:16:43.977332] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:38.627 [2024-06-10 12:16:43.977380] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:38.627 [2024-06-10 12:16:43.978758] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:38.627 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:38.627 Available Spare: 0% 00:12:38.627 Available Spare Threshold: 0% 00:12:38.627 Life Percentage Used: 0% 00:12:38.627 Data Units Read: 0 00:12:38.627 Data Units Written: 0 00:12:38.627 Host Read Commands: 0 00:12:38.627 Host Write Commands: 0 00:12:38.627 Controller Busy Time: 0 minutes 00:12:38.627 Power Cycles: 0 00:12:38.627 Power On Hours: 0 hours 00:12:38.627 Unsafe Shutdowns: 0 00:12:38.627 Unrecoverable Media Errors: 0 00:12:38.627 Lifetime Error Log Entries: 0 00:12:38.627 Warning Temperature Time: 0 
minutes 00:12:38.627 Critical Temperature Time: 0 minutes 00:12:38.627 00:12:38.627 Number of Queues 00:12:38.627 ================ 00:12:38.627 Number of I/O Submission Queues: 127 00:12:38.627 Number of I/O Completion Queues: 127 00:12:38.627 00:12:38.627 Active Namespaces 00:12:38.627 ================= 00:12:38.627 Namespace ID:1 00:12:38.627 Error Recovery Timeout: Unlimited 00:12:38.627 Command Set Identifier: NVM (00h) 00:12:38.627 Deallocate: Supported 00:12:38.627 Deallocated/Unwritten Error: Not Supported 00:12:38.628 Deallocated Read Value: Unknown 00:12:38.628 Deallocate in Write Zeroes: Not Supported 00:12:38.628 Deallocated Guard Field: 0xFFFF 00:12:38.628 Flush: Supported 00:12:38.628 Reservation: Supported 00:12:38.628 Namespace Sharing Capabilities: Multiple Controllers 00:12:38.628 Size (in LBAs): 131072 (0GiB) 00:12:38.628 Capacity (in LBAs): 131072 (0GiB) 00:12:38.628 Utilization (in LBAs): 131072 (0GiB) 00:12:38.628 NGUID: 4832B3AA3DB94FAE9C61A81B7221D514 00:12:38.628 UUID: 4832b3aa-3db9-4fae-9c61-a81b7221d514 00:12:38.628 Thin Provisioning: Not Supported 00:12:38.628 Per-NS Atomic Units: Yes 00:12:38.628 Atomic Boundary Size (Normal): 0 00:12:38.628 Atomic Boundary Size (PFail): 0 00:12:38.628 Atomic Boundary Offset: 0 00:12:38.628 Maximum Single Source Range Length: 65535 00:12:38.628 Maximum Copy Length: 65535 00:12:38.628 Maximum Source Range Count: 1 00:12:38.628 NGUID/EUI64 Never Reused: No 00:12:38.628 Namespace Write Protected: No 00:12:38.628 Number of LBA Formats: 1 00:12:38.628 Current LBA Format: LBA Format #00 00:12:38.628 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:38.628 00:12:38.628 12:16:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:38.628 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.628 [2024-06-10 12:16:44.166244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.918 Initializing NVMe Controllers 00:12:43.918 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:43.918 Initialization complete. Launching workers. 
00:12:43.918 ======================================================== 00:12:43.918 Latency(us) 00:12:43.918 Device Information : IOPS MiB/s Average min max 00:12:43.918 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.71 156.05 3203.86 831.52 6854.16 00:12:43.918 ======================================================== 00:12:43.918 Total : 39947.71 156.05 3203.86 831.52 6854.16 00:12:43.918 00:12:43.918 [2024-06-10 12:16:49.270376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.918 12:16:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:43.918 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.918 [2024-06-10 12:16:49.445931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.202 Initializing NVMe Controllers 00:12:49.202 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:49.202 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:49.202 Initialization complete. Launching workers. 00:12:49.202 ======================================================== 00:12:49.202 Latency(us) 00:12:49.202 Device Information : IOPS MiB/s Average min max 00:12:49.202 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35977.56 140.54 3557.66 1099.80 6847.79 00:12:49.203 ======================================================== 00:12:49.203 Total : 35977.56 140.54 3557.66 1099.80 6847.79 00:12:49.203 00:12:49.203 [2024-06-10 12:16:54.466444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.203 12:16:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:49.203 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.203 [2024-06-10 12:16:54.665632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.489 [2024-06-10 12:16:59.809281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.489 Initializing NVMe Controllers 00:12:54.489 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.489 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:54.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:54.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:54.489 Initialization complete. Launching workers. 
00:12:54.489 Starting thread on core 2 00:12:54.489 Starting thread on core 3 00:12:54.489 Starting thread on core 1 00:12:54.489 12:16:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:54.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.489 [2024-06-10 12:17:00.077680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.784 [2024-06-10 12:17:03.161809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.784 Initializing NVMe Controllers 00:12:57.784 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.784 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.784 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:57.784 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:57.784 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:57.784 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:57.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:57.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:57.784 Initialization complete. Launching workers. 00:12:57.784 Starting thread on core 1 with urgent priority queue 00:12:57.784 Starting thread on core 2 with urgent priority queue 00:12:57.784 Starting thread on core 3 with urgent priority queue 00:12:57.784 Starting thread on core 0 with urgent priority queue 00:12:57.784 SPDK bdev Controller (SPDK2 ) core 0: 11877.00 IO/s 8.42 secs/100000 ios 00:12:57.784 SPDK bdev Controller (SPDK2 ) core 1: 9173.33 IO/s 10.90 secs/100000 ios 00:12:57.784 SPDK bdev Controller (SPDK2 ) core 2: 8173.67 IO/s 12.23 secs/100000 ios 00:12:57.784 SPDK bdev Controller (SPDK2 ) core 3: 8183.00 IO/s 12.22 secs/100000 ios 00:12:57.784 ======================================================== 00:12:57.784 00:12:57.784 12:17:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:57.784 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.044 [2024-06-10 12:17:03.432294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.044 Initializing NVMe Controllers 00:12:58.044 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.044 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.044 Namespace ID: 1 size: 0GB 00:12:58.044 Initialization complete. 00:12:58.044 INFO: using host memory buffer for IO 00:12:58.044 Hello world! 
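Note: the perf and example runs above all address the controller through a vfio-user transport ID string rather than a PCI address. A minimal sketch of that invocation pattern, reusing the endpoint and flags visible in this log (the relative binary path is an assumption about the local build tree):

    # Sketch: spdk_nvme_perf against the vfio-user controller exercised above.
    # -r transport ID, -q queue depth, -o I/O size in bytes, -w workload,
    # -t run time in seconds, -c core mask; -s and -g as in the log lines above.
    PERF=./build/bin/spdk_nvme_perf
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2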
00:12:58.044 [2024-06-10 12:17:03.445386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.044 12:17:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:58.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.305 [2024-06-10 12:17:03.715185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.251 Initializing NVMe Controllers 00:12:59.251 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.251 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.251 Initialization complete. Launching workers. 00:12:59.251 submit (in ns) avg, min, max = 6871.2, 3928.3, 4003230.8 00:12:59.251 complete (in ns) avg, min, max = 17935.1, 2400.0, 5993654.2 00:12:59.251 00:12:59.251 Submit histogram 00:12:59.251 ================ 00:12:59.251 Range in us Cumulative Count 00:12:59.251 3.920 - 3.947: 0.5000% ( 97) 00:12:59.251 3.947 - 3.973: 5.0464% ( 882) 00:12:59.251 3.973 - 4.000: 12.9742% ( 1538) 00:12:59.251 4.000 - 4.027: 22.8299% ( 1912) 00:12:59.251 4.027 - 4.053: 34.0928% ( 2185) 00:12:59.251 4.053 - 4.080: 47.8660% ( 2672) 00:12:59.251 4.080 - 4.107: 64.3711% ( 3202) 00:12:59.251 4.107 - 4.133: 80.5928% ( 3147) 00:12:59.251 4.133 - 4.160: 91.7887% ( 2172) 00:12:59.251 4.160 - 4.187: 96.6186% ( 937) 00:12:59.251 4.187 - 4.213: 98.5773% ( 380) 00:12:59.251 4.213 - 4.240: 99.1959% ( 120) 00:12:59.251 4.240 - 4.267: 99.3557% ( 31) 00:12:59.251 4.267 - 4.293: 99.3918% ( 7) 00:12:59.251 4.293 - 4.320: 99.4227% ( 6) 00:12:59.251 4.320 - 4.347: 99.4433% ( 4) 00:12:59.251 4.347 - 4.373: 99.4588% ( 3) 00:12:59.251 4.373 - 4.400: 99.4691% ( 2) 00:12:59.251 4.400 - 4.427: 99.4897% ( 4) 00:12:59.251 4.507 - 4.533: 99.4948% ( 1) 00:12:59.251 4.560 - 4.587: 99.5052% ( 2) 00:12:59.251 4.640 - 4.667: 99.5103% ( 1) 00:12:59.251 4.827 - 4.853: 99.5155% ( 1) 00:12:59.251 4.853 - 4.880: 99.5206% ( 1) 00:12:59.251 4.880 - 4.907: 99.5258% ( 1) 00:12:59.251 4.987 - 5.013: 99.5309% ( 1) 00:12:59.251 5.040 - 5.067: 99.5361% ( 1) 00:12:59.251 5.093 - 5.120: 99.5412% ( 1) 00:12:59.251 5.280 - 5.307: 99.5464% ( 1) 00:12:59.251 5.600 - 5.627: 99.5515% ( 1) 00:12:59.251 5.760 - 5.787: 99.5567% ( 1) 00:12:59.251 5.787 - 5.813: 99.5619% ( 1) 00:12:59.251 5.813 - 5.840: 99.5670% ( 1) 00:12:59.251 5.920 - 5.947: 99.5722% ( 1) 00:12:59.251 6.053 - 6.080: 99.5773% ( 1) 00:12:59.251 6.107 - 6.133: 99.5825% ( 1) 00:12:59.251 6.133 - 6.160: 99.5876% ( 1) 00:12:59.251 6.213 - 6.240: 99.5928% ( 1) 00:12:59.251 6.267 - 6.293: 99.5979% ( 1) 00:12:59.251 6.320 - 6.347: 99.6031% ( 1) 00:12:59.251 6.347 - 6.373: 99.6134% ( 2) 00:12:59.251 6.373 - 6.400: 99.6186% ( 1) 00:12:59.251 6.453 - 6.480: 99.6237% ( 1) 00:12:59.251 6.480 - 6.507: 99.6340% ( 2) 00:12:59.251 6.533 - 6.560: 99.6392% ( 1) 00:12:59.251 6.587 - 6.613: 99.6495% ( 2) 00:12:59.251 6.640 - 6.667: 99.6546% ( 1) 00:12:59.251 6.667 - 6.693: 99.6701% ( 3) 00:12:59.251 6.720 - 6.747: 99.6753% ( 1) 00:12:59.251 6.747 - 6.773: 99.6804% ( 1) 00:12:59.251 6.800 - 6.827: 99.6856% ( 1) 00:12:59.251 6.827 - 6.880: 99.6959% ( 2) 00:12:59.251 6.880 - 6.933: 99.7113% ( 3) 00:12:59.251 6.933 - 6.987: 99.7165% ( 1) 00:12:59.251 6.987 - 7.040: 99.7268% ( 2) 00:12:59.251 7.040 - 7.093: 99.7320% ( 1) 00:12:59.251 7.147 - 7.200: 99.7371% ( 1) 
00:12:59.251 7.200 - 7.253: 99.7526% ( 3) 00:12:59.251 7.253 - 7.307: 99.7835% ( 6) 00:12:59.252 7.307 - 7.360: 99.7990% ( 3) 00:12:59.252 7.360 - 7.413: 99.8041% ( 1) 00:12:59.252 7.413 - 7.467: 99.8144% ( 2) 00:12:59.252 7.467 - 7.520: 99.8196% ( 1) 00:12:59.252 7.520 - 7.573: 99.8247% ( 1) 00:12:59.252 7.573 - 7.627: 99.8351% ( 2) 00:12:59.252 7.627 - 7.680: 99.8402% ( 1) 00:12:59.252 7.680 - 7.733: 99.8505% ( 2) 00:12:59.252 7.733 - 7.787: 99.8557% ( 1) 00:12:59.252 7.787 - 7.840: 99.8711% ( 3) 00:12:59.252 7.893 - 7.947: 99.8763% ( 1) 00:12:59.252 7.947 - 8.000: 99.8814% ( 1) 00:12:59.252 8.000 - 8.053: 99.8866% ( 1) 00:12:59.252 8.107 - 8.160: 99.8918% ( 1) 00:12:59.252 8.213 - 8.267: 99.9021% ( 2) 00:12:59.252 8.587 - 8.640: 99.9072% ( 1) 00:12:59.252 8.640 - 8.693: 99.9124% ( 1) 00:12:59.252 10.560 - 10.613: 99.9175% ( 1) 00:12:59.252 14.507 - 14.613: 99.9227% ( 1) 00:12:59.252 14.720 - 14.827: 99.9278% ( 1) 00:12:59.252 [2024-06-10 12:17:04.811921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.514 2007.040 - 2020.693: 99.9330% ( 1) 00:12:59.514 3986.773 - 4014.080: 100.0000% ( 13) 00:12:59.514 00:12:59.514 Complete histogram 00:12:59.514 ================== 00:12:59.514 Range in us Cumulative Count 00:12:59.514 2.400 - 2.413: 0.5825% ( 113) 00:12:59.514 2.413 - 2.427: 1.0619% ( 93) 00:12:59.514 2.427 - 2.440: 1.1649% ( 20) 00:12:59.514 2.440 - 2.453: 55.0567% ( 10455) 00:12:59.514 2.453 - 2.467: 62.7113% ( 1485) 00:12:59.514 2.467 - 2.480: 73.3505% ( 2064) 00:12:59.514 2.480 - 2.493: 78.4948% ( 998) 00:12:59.514 2.493 - 2.507: 81.6856% ( 619) 00:12:59.514 2.507 - 2.520: 85.4639% ( 733) 00:12:59.514 2.520 - 2.533: 91.5773% ( 1186) 00:12:59.514 2.533 - 2.547: 95.6443% ( 789) 00:12:59.514 2.547 - 2.560: 97.3093% ( 323) 00:12:59.514 2.560 - 2.573: 98.3969% ( 211) 00:12:59.514 2.573 - 2.587: 99.0464% ( 126) 00:12:59.514 2.587 - 2.600: 99.1907% ( 28) 00:12:59.514 2.600 - 2.613: 99.2629% ( 14) 00:12:59.514 2.613 - 2.627: 99.2680% ( 1) 00:12:59.514 2.627 - 2.640: 99.2732% ( 1) 00:12:59.514 2.653 - 2.667: 99.2784% ( 1) 00:12:59.514 2.667 - 2.680: 99.2887% ( 2) 00:12:59.514 2.720 - 2.733: 99.2938% ( 1) 00:12:59.514 3.000 - 3.013: 99.2990% ( 1) 00:12:59.514 3.080 - 3.093: 99.3041% ( 1) 00:12:59.514 3.187 - 3.200: 99.3093% ( 1) 00:12:59.514 4.587 - 4.613: 99.3196% ( 2) 00:12:59.514 4.667 - 4.693: 99.3247% ( 1) 00:12:59.514 4.773 - 4.800: 99.3299% ( 1) 00:12:59.514 4.800 - 4.827: 99.3351% ( 1) 00:12:59.514 4.880 - 4.907: 99.3402% ( 1) 00:12:59.514 4.933 - 4.960: 99.3505% ( 2) 00:12:59.514 4.960 - 4.987: 99.3557% ( 1) 00:12:59.514 5.040 - 5.067: 99.3608% ( 1) 00:12:59.514 5.067 - 5.093: 99.3660% ( 1) 00:12:59.514 5.120 - 5.147: 99.3814% ( 3) 00:12:59.514 5.200 - 5.227: 99.3866% ( 1) 00:12:59.514 5.227 - 5.253: 99.3918% ( 1) 00:12:59.514 5.253 - 5.280: 99.4021% ( 2) 00:12:59.514 5.307 - 5.333: 99.4072% ( 1) 00:12:59.514 5.333 - 5.360: 99.4124% ( 1) 00:12:59.514 5.360 - 5.387: 99.4227% ( 2) 00:12:59.514 5.387 - 5.413: 99.4330% ( 2) 00:12:59.514 5.413 - 5.440: 99.4381% ( 1) 00:12:59.514 5.440 - 5.467: 99.4433% ( 1) 00:12:59.514 5.493 - 5.520: 99.4485% ( 1) 00:12:59.514 5.520 - 5.547: 99.4536% ( 1) 00:12:59.514 5.547 - 5.573: 99.4588% ( 1) 00:12:59.514 5.573 - 5.600: 99.4639% ( 1) 00:12:59.514 5.600 - 5.627: 99.4691% ( 1) 00:12:59.514 5.653 - 5.680: 99.4742% ( 1) 00:12:59.514 5.680 - 5.707: 99.4845% ( 2) 00:12:59.514 5.733 - 5.760: 99.4897% ( 1) 00:12:59.514 5.760 - 5.787: 99.5000% ( 2) 00:12:59.514 5.787 - 5.813: 99.5155% ( 3) 
00:12:59.514 5.867 - 5.893: 99.5309% ( 3) 00:12:59.514 5.947 - 5.973: 99.5361% ( 1) 00:12:59.514 5.973 - 6.000: 99.5412% ( 1) 00:12:59.514 6.053 - 6.080: 99.5464% ( 1) 00:12:59.514 6.107 - 6.133: 99.5515% ( 1) 00:12:59.514 6.133 - 6.160: 99.5619% ( 2) 00:12:59.514 6.293 - 6.320: 99.5670% ( 1) 00:12:59.514 6.320 - 6.347: 99.5722% ( 1) 00:12:59.514 6.613 - 6.640: 99.5773% ( 1) 00:12:59.514 6.773 - 6.800: 99.5876% ( 2) 00:12:59.514 11.360 - 11.413: 99.5928% ( 1) 00:12:59.514 11.520 - 11.573: 99.5979% ( 1) 00:12:59.514 12.533 - 12.587: 99.6031% ( 1) 00:12:59.514 13.280 - 13.333: 99.6082% ( 1) 00:12:59.514 42.453 - 42.667: 99.6134% ( 1) 00:12:59.514 150.187 - 151.040: 99.6186% ( 1) 00:12:59.514 2007.040 - 2020.693: 99.6289% ( 2) 00:12:59.515 3986.773 - 4014.080: 99.9794% ( 68) 00:12:59.515 5980.160 - 6007.467: 100.0000% ( 4) 00:12:59.515 00:12:59.515 12:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:59.515 12:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:59.515 12:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:59.515 12:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:59.515 12:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:59.515 [ 00:12:59.515 { 00:12:59.515 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.515 "subtype": "Discovery", 00:12:59.515 "listen_addresses": [], 00:12:59.515 "allow_any_host": true, 00:12:59.515 "hosts": [] 00:12:59.515 }, 00:12:59.515 { 00:12:59.515 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:59.515 "subtype": "NVMe", 00:12:59.515 "listen_addresses": [ 00:12:59.515 { 00:12:59.515 "trtype": "VFIOUSER", 00:12:59.515 "adrfam": "IPv4", 00:12:59.515 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:59.515 "trsvcid": "0" 00:12:59.515 } 00:12:59.515 ], 00:12:59.515 "allow_any_host": true, 00:12:59.515 "hosts": [], 00:12:59.515 "serial_number": "SPDK1", 00:12:59.515 "model_number": "SPDK bdev Controller", 00:12:59.515 "max_namespaces": 32, 00:12:59.515 "min_cntlid": 1, 00:12:59.515 "max_cntlid": 65519, 00:12:59.515 "namespaces": [ 00:12:59.515 { 00:12:59.515 "nsid": 1, 00:12:59.515 "bdev_name": "Malloc1", 00:12:59.515 "name": "Malloc1", 00:12:59.515 "nguid": "2BC099E587304406B3A37B46142AA374", 00:12:59.515 "uuid": "2bc099e5-8730-4406-b3a3-7b46142aa374" 00:12:59.515 }, 00:12:59.515 { 00:12:59.515 "nsid": 2, 00:12:59.515 "bdev_name": "Malloc3", 00:12:59.515 "name": "Malloc3", 00:12:59.515 "nguid": "58C45BF0C8DE45CBA827129E65804CF7", 00:12:59.515 "uuid": "58c45bf0-c8de-45cb-a827-129e65804cf7" 00:12:59.515 } 00:12:59.515 ] 00:12:59.515 }, 00:12:59.515 { 00:12:59.515 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:59.515 "subtype": "NVMe", 00:12:59.515 "listen_addresses": [ 00:12:59.515 { 00:12:59.515 "trtype": "VFIOUSER", 00:12:59.515 "adrfam": "IPv4", 00:12:59.515 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:59.515 "trsvcid": "0" 00:12:59.515 } 00:12:59.515 ], 00:12:59.515 "allow_any_host": true, 00:12:59.515 "hosts": [], 00:12:59.515 "serial_number": "SPDK2", 00:12:59.515 "model_number": "SPDK bdev Controller", 00:12:59.515 "max_namespaces": 32, 00:12:59.515 "min_cntlid": 1, 00:12:59.515 "max_cntlid": 65519, 00:12:59.515 "namespaces": [ 00:12:59.515 { 
00:12:59.515 "nsid": 1, 00:12:59.515 "bdev_name": "Malloc2", 00:12:59.515 "name": "Malloc2", 00:12:59.515 "nguid": "4832B3AA3DB94FAE9C61A81B7221D514", 00:12:59.515 "uuid": "4832b3aa-3db9-4fae-9c61-a81b7221d514" 00:12:59.515 } 00:12:59.515 ] 00:12:59.515 } 00:12:59.515 ] 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=560356 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:59.515 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:59.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.775 Malloc4 00:12:59.775 [2024-06-10 12:17:05.198626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.775 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:59.775 [2024-06-10 12:17:05.366696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.036 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:00.036 Asynchronous Event Request test 00:13:00.036 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.036 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.036 Registering asynchronous event callbacks... 00:13:00.036 Starting namespace attribute notice tests for all controllers... 00:13:00.036 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:00.036 aer_cb - Changed Namespace 00:13:00.036 Cleaning up... 
00:13:00.036 [ 00:13:00.036 { 00:13:00.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:00.036 "subtype": "Discovery", 00:13:00.036 "listen_addresses": [], 00:13:00.036 "allow_any_host": true, 00:13:00.036 "hosts": [] 00:13:00.036 }, 00:13:00.036 { 00:13:00.036 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:00.036 "subtype": "NVMe", 00:13:00.036 "listen_addresses": [ 00:13:00.036 { 00:13:00.036 "trtype": "VFIOUSER", 00:13:00.036 "adrfam": "IPv4", 00:13:00.036 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:00.036 "trsvcid": "0" 00:13:00.036 } 00:13:00.036 ], 00:13:00.036 "allow_any_host": true, 00:13:00.036 "hosts": [], 00:13:00.036 "serial_number": "SPDK1", 00:13:00.036 "model_number": "SPDK bdev Controller", 00:13:00.036 "max_namespaces": 32, 00:13:00.036 "min_cntlid": 1, 00:13:00.036 "max_cntlid": 65519, 00:13:00.036 "namespaces": [ 00:13:00.036 { 00:13:00.036 "nsid": 1, 00:13:00.036 "bdev_name": "Malloc1", 00:13:00.036 "name": "Malloc1", 00:13:00.036 "nguid": "2BC099E587304406B3A37B46142AA374", 00:13:00.036 "uuid": "2bc099e5-8730-4406-b3a3-7b46142aa374" 00:13:00.036 }, 00:13:00.036 { 00:13:00.036 "nsid": 2, 00:13:00.036 "bdev_name": "Malloc3", 00:13:00.036 "name": "Malloc3", 00:13:00.036 "nguid": "58C45BF0C8DE45CBA827129E65804CF7", 00:13:00.036 "uuid": "58c45bf0-c8de-45cb-a827-129e65804cf7" 00:13:00.036 } 00:13:00.036 ] 00:13:00.036 }, 00:13:00.036 { 00:13:00.036 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:00.036 "subtype": "NVMe", 00:13:00.036 "listen_addresses": [ 00:13:00.036 { 00:13:00.036 "trtype": "VFIOUSER", 00:13:00.036 "adrfam": "IPv4", 00:13:00.036 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:00.036 "trsvcid": "0" 00:13:00.036 } 00:13:00.036 ], 00:13:00.036 "allow_any_host": true, 00:13:00.036 "hosts": [], 00:13:00.036 "serial_number": "SPDK2", 00:13:00.036 "model_number": "SPDK bdev Controller", 00:13:00.036 "max_namespaces": 32, 00:13:00.036 "min_cntlid": 1, 00:13:00.036 "max_cntlid": 65519, 00:13:00.036 "namespaces": [ 00:13:00.036 { 00:13:00.036 "nsid": 1, 00:13:00.036 "bdev_name": "Malloc2", 00:13:00.036 "name": "Malloc2", 00:13:00.036 "nguid": "4832B3AA3DB94FAE9C61A81B7221D514", 00:13:00.036 "uuid": "4832b3aa-3db9-4fae-9c61-a81b7221d514" 00:13:00.036 }, 00:13:00.036 { 00:13:00.036 "nsid": 2, 00:13:00.037 "bdev_name": "Malloc4", 00:13:00.037 "name": "Malloc4", 00:13:00.037 "nguid": "50FA3D94500B4DB388BECCA37075A616", 00:13:00.037 "uuid": "50fa3d94-500b-4db3-88be-cca37075a616" 00:13:00.037 } 00:13:00.037 ] 00:13:00.037 } 00:13:00.037 ] 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 560356 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 551267 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 551267 ']' 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 551267 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 551267 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 
00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 551267' 00:13:00.037 killing process with pid 551267 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 551267 00:13:00.037 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 551267 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=560402 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 560402' 00:13:00.298 Process pid: 560402 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 560402 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 560402 ']' 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:00.298 12:17:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:00.298 [2024-06-10 12:17:05.843203] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:00.298 [2024-06-10 12:17:05.844134] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:00.298 [2024-06-10 12:17:05.844176] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.298 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.560 [2024-06-10 12:17:05.911877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.560 [2024-06-10 12:17:05.978316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.560 [2024-06-10 12:17:05.978355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:00.560 [2024-06-10 12:17:05.978362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.560 [2024-06-10 12:17:05.978369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.560 [2024-06-10 12:17:05.978374] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.560 [2024-06-10 12:17:05.978511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.560 [2024-06-10 12:17:05.978626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.560 [2024-06-10 12:17:05.978783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.560 [2024-06-10 12:17:05.978784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.560 [2024-06-10 12:17:06.052637] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:00.560 [2024-06-10 12:17:06.052717] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:00.560 [2024-06-10 12:17:06.053721] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:00.560 [2024-06-10 12:17:06.054133] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:00.560 [2024-06-10 12:17:06.054248] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:01.132 12:17:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:01.132 12:17:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:01.132 12:17:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:02.074 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:02.335 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:02.335 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:02.335 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:02.335 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:02.335 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:02.633 Malloc1 00:13:02.633 12:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:02.633 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:02.895 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:02.895 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:02.895 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:02.895 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:03.156 Malloc2 00:13:03.156 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:03.415 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:03.415 12:17:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:03.676 12:17:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:03.676 12:17:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 560402 00:13:03.676 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 560402 ']' 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 560402 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 560402 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 560402' 00:13:03.677 killing process with pid 560402 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 560402 00:13:03.677 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 560402 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:03.938 00:13:03.938 real 0m50.681s 00:13:03.938 user 3m20.844s 00:13:03.938 sys 0m3.067s 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:03.938 ************************************ 00:13:03.938 END TEST nvmf_vfio_user 00:13:03.938 ************************************ 00:13:03.938 12:17:09 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:03.938 12:17:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:03.938 12:17:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:03.938 12:17:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.938 ************************************ 00:13:03.938 START TEST nvmf_vfio_user_nvme_compliance 00:13:03.938 ************************************ 
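Note: the compliance run below prepares its single vfio-user controller with the same transport/bdev/subsystem RPC sequence used throughout this job. A condensed sketch with the names and sizes from this run (the test issues these through its rpc_cmd wrapper; calling scripts/rpc.py directly is an assumption):

    # Sketch: minimal vfio-user target bring-up (values from this log).
    RPC=./scripts/rpc.py
    "$RPC" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    "$RPC" bdev_malloc_create 64 512 -b malloc0    # 64 MB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32    # -a any host, -s serial, -m max namespaces
    "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0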
00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:03.938 * Looking for test storage... 00:13:03.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.938 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=561304 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 561304' 00:13:03.939 Process pid: 561304 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 561304 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 561304 ']' 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:03.939 12:17:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.201 [2024-06-10 12:17:09.592616] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:04.201 [2024-06-10 12:17:09.592682] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.201 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.201 [2024-06-10 12:17:09.666796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.201 [2024-06-10 12:17:09.741491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.201 [2024-06-10 12:17:09.741530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.201 [2024-06-10 12:17:09.741538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.201 [2024-06-10 12:17:09.741544] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.201 [2024-06-10 12:17:09.741550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
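Note: the compliance suite hosts its own target process. The launch line used above, with flag meanings as printed elsewhere in this log (-i shared-memory id, -e tracepoint group mask, -m reactor core mask); capturing the pid via $! is an assumption about the wrapper script:

    # Sketch: start the target the compliance test attaches to (flags from this log).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # -m 0x7 runs reactors on cores 0-2, matching the three
    # "Reactor started on core N" notices that follow.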
00:13:04.201 [2024-06-10 12:17:09.741688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.201 [2024-06-10 12:17:09.741803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.201 [2024-06-10 12:17:09.741805] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.770 12:17:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:04.770 12:17:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:13:04.770 12:17:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 malloc0 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.152 
12:17:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:06.152 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.152 00:13:06.152 00:13:06.152 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.152 http://cunit.sourceforge.net/ 00:13:06.152 00:13:06.152 00:13:06.152 Suite: nvme_compliance 00:13:06.152 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 12:17:11.634845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.152 [2024-06-10 12:17:11.636183] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:06.152 [2024-06-10 12:17:11.636198] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:06.152 [2024-06-10 12:17:11.636203] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:06.153 [2024-06-10 12:17:11.637862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.153 passed 00:13:06.153 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 12:17:11.732485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.153 [2024-06-10 12:17:11.735500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.412 passed 00:13:06.412 Test: admin_identify_ns ...[2024-06-10 12:17:11.831437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.412 [2024-06-10 12:17:11.895208] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:06.412 [2024-06-10 12:17:11.902208] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:06.412 [2024-06-10 12:17:11.924322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.412 passed 00:13:06.412 Test: admin_get_features_mandatory_features ...[2024-06-10 12:17:12.015932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.673 [2024-06-10 12:17:12.018948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.673 passed 00:13:06.673 Test: admin_get_features_optional_features ...[2024-06-10 12:17:12.112537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.673 [2024-06-10 12:17:12.115551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.673 passed 00:13:06.673 Test: admin_set_features_number_of_queues ...[2024-06-10 12:17:12.209660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.934 [2024-06-10 12:17:12.314315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.934 passed 00:13:06.934 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 12:17:12.407277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.934 [2024-06-10 12:17:12.410300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.934 passed 00:13:06.934 Test: admin_get_log_page_with_lpo ...[2024-06-10 12:17:12.503420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.195 [2024-06-10 12:17:12.571209] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:07.195 [2024-06-10 12:17:12.587270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.195 passed 00:13:07.195 Test: fabric_property_get ...[2024-06-10 12:17:12.676891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.195 [2024-06-10 12:17:12.678124] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:07.195 [2024-06-10 12:17:12.679907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.195 passed 00:13:07.195 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 12:17:12.774439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.195 [2024-06-10 12:17:12.775704] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:07.195 [2024-06-10 12:17:12.777466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.455 passed 00:13:07.456 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 12:17:12.866623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.456 [2024-06-10 12:17:12.950205] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.456 [2024-06-10 12:17:12.966200] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.456 [2024-06-10 12:17:12.971289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.456 passed 00:13:07.716 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 12:17:13.063918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.716 [2024-06-10 12:17:13.065144] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:07.716 [2024-06-10 12:17:13.066938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.716 passed 00:13:07.716 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 12:17:13.160477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.716 [2024-06-10 12:17:13.236203] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:07.716 [2024-06-10 12:17:13.260203] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.716 [2024-06-10 12:17:13.265286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.716 passed 00:13:07.977 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 12:17:13.356872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.977 [2024-06-10 12:17:13.358102] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:07.977 [2024-06-10 12:17:13.358123] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:07.977 [2024-06-10 12:17:13.359890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.977 passed 00:13:07.977 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 12:17:13.450965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.977 [2024-06-10 12:17:13.546200] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:07.977 [2024-06-10 12:17:13.554206] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:07.977 [2024-06-10 12:17:13.562240] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:07.977 [2024-06-10 12:17:13.570204] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:08.238 [2024-06-10 12:17:13.599287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.238 passed 00:13:08.238 Test: admin_create_io_sq_verify_pc ...[2024-06-10 12:17:13.690882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.238 [2024-06-10 12:17:13.707210] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:08.238 [2024-06-10 12:17:13.725029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.238 passed 00:13:08.238 Test: admin_create_io_qp_max_qps ...[2024-06-10 12:17:13.818585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.622 [2024-06-10 12:17:14.919206] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:09.883 [2024-06-10 12:17:15.302506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.883 passed 00:13:09.883 Test: admin_create_io_sq_shared_cq ...[2024-06-10 12:17:15.396432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.143 [2024-06-10 12:17:15.528212] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:10.143 [2024-06-10 12:17:15.565262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.143 passed 00:13:10.143 00:13:10.143 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.143 suites 1 1 n/a 0 0 00:13:10.143 tests 18 18 18 0 0 00:13:10.143 asserts 360 360 360 0 n/a 00:13:10.143 00:13:10.143 Elapsed time = 1.651 seconds 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 561304 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 561304 ']' 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 561304 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 561304 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 561304' 00:13:10.143 killing process with pid 561304 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 561304 00:13:10.143 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 561304 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:10.404 00:13:10.404 real 0m6.418s 00:13:10.404 user 0m18.320s 00:13:10.404 sys 0m0.486s 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.404 ************************************ 00:13:10.404 END TEST nvmf_vfio_user_nvme_compliance 00:13:10.404 ************************************ 00:13:10.404 12:17:15 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:10.404 12:17:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:10.404 12:17:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:10.404 12:17:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.404 ************************************ 00:13:10.404 START TEST nvmf_vfio_user_fuzz 00:13:10.404 ************************************ 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:10.404 * Looking for test storage... 00:13:10.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.404 12:17:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.404 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
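The killprocess teardown traced at the end of the compliance run above follows a fixed pattern: confirm the pid is set and alive, check the command name before signalling so a sudo wrapper is never killed by mistake, then kill and reap. A minimal sketch reconstructed from that xtrace, assuming the same autotest_common.sh conventions (paraphrased, not the verbatim helper source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # the '[ -z $pid ]' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then                # the trace branches on uname
            local name
            name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 in this run
            [ "$name" = sudo ] && return 1             # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }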
00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.666 12:17:16 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=562534 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 562534' 00:13:10.666 Process pid: 562534 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 562534 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 562534 ']' 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
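Condensing the startup just traced: the fuzz script launches nvmf_tgt in the background on a single core, records the pid, arms a cleanup trap, and blocks in waitforlisten until the RPC socket answers. A sketch of that pattern, assuming the helper polls rpc_get_methods the way SPDK's autotest_common.sh does (the polling loop here is a paraphrase, not the verbatim helper):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # -i shm id, -e tracepoint group mask, -m core mask
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    until $spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                                        # target not accepting RPCs yet
    done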
00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:10.666 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.239 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:11.239 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:13:11.239 12:17:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:12.622 malloc0 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:12.622 12:17:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:44.740 Fuzzing completed. 
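Stripped of the xtrace noise, the target that nvme_fuzz just hammered was assembled by the handful of RPCs traced above (rpc_cmd is the suite's thin wrapper around scripts/rpc.py):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz then drives randomized commands at that vfio-user endpoint for 30 seconds (-t 30) with what appears to be a fixed seed (-S 123456), so a failing run can be replayed.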
Shutting down the fuzz application 00:13:44.740 00:13:44.740 Dumping successful admin opcodes: 00:13:44.740 8, 9, 10, 24, 00:13:44.740 Dumping successful io opcodes: 00:13:44.740 0, 00:13:44.740 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1127869, total successful commands: 4442, random_seed: 685522816 00:13:44.740 NS: 0x200003a1ef00 admin qp, Total commands completed: 141800, total successful commands: 1150, random_seed: 3128247488 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 562534 ']' 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 562534' 00:13:44.740 killing process with pid 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 562534 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:44.740 00:13:44.740 real 0m33.662s 00:13:44.740 user 0m37.776s 00:13:44.740 sys 0m25.756s 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:44.740 12:17:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:44.740 ************************************ 00:13:44.740 END TEST nvmf_vfio_user_fuzz 00:13:44.740 ************************************ 00:13:44.740 12:17:49 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.740 12:17:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:44.740 12:17:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:44.740 12:17:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.740 ************************************ 00:13:44.740 START TEST nvmf_host_management 00:13:44.740 ************************************ 
00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.740 * Looking for test storage... 00:13:44.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.740 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.741 12:17:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.918 12:17:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:52.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:52.918 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:52.918 Found net devices under 0000:31:00.0: cvl_0_0 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:52.918 Found net devices under 0000:31:00.1: cvl_0_1 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.918 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.919 12:17:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:13:52.919 00:13:52.919 --- 10.0.0.2 ping statistics --- 00:13:52.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.919 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:13:52.919 00:13:52.919 --- 10.0.0.1 ping statistics --- 00:13:52.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.919 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=573498 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 573498 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 573498 ']' 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
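The plumbing above builds the usual phy-mode loopback topology: one port of the e810 pair stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), its sibling moves into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), and both directions are ping-verified before any NVMe/TCP traffic flows. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1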
00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:52.919 12:17:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:52.919 [2024-06-10 12:17:58.019042] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:52.919 [2024-06-10 12:17:58.019116] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.919 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.919 [2024-06-10 12:17:58.117624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.919 [2024-06-10 12:17:58.213550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.919 [2024-06-10 12:17:58.213604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.919 [2024-06-10 12:17:58.213613] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.919 [2024-06-10 12:17:58.213620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.919 [2024-06-10 12:17:58.213626] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.919 [2024-06-10 12:17:58.213756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.919 [2024-06-10 12:17:58.213916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.919 [2024-06-10 12:17:58.214083] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.919 [2024-06-10 12:17:58.214084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.491 [2024-06-10 12:17:58.833593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:53.491 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- 
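The four reactor notices above line up exactly with the -m 0x1E mask handed to nvmf_tgt: 0x1E is binary 11110, so reactors run on cores 1 through 4 while core 0 is left free for the bdevperf initiator started below with -c 0x1. A quick way to decode such a mask in the shell:

    mask=0x1E
    for i in {0..7}; do
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done
    # prints cores 1, 2, 3 and 4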
common/autotest_common.sh@10 -- # set +x 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 Malloc0 00:13:53.492 [2024-06-10 12:17:58.892550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=573560 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 573560 /var/tmp/bdevperf.sock 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 573560 ']' 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:53.492 { 00:13:53.492 "params": { 00:13:53.492 "name": "Nvme$subsystem", 00:13:53.492 "trtype": "$TEST_TRANSPORT", 00:13:53.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.492 "adrfam": "ipv4", 00:13:53.492 "trsvcid": "$NVMF_PORT", 00:13:53.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.492 "hdgst": ${hdgst:-false}, 00:13:53.492 "ddgst": ${ddgst:-false} 00:13:53.492 }, 00:13:53.492 "method": "bdev_nvme_attach_controller" 00:13:53.492 } 00:13:53.492 EOF 00:13:53.492 )") 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:53.492 12:17:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.492 "params": { 00:13:53.492 "name": "Nvme0", 00:13:53.492 "trtype": "tcp", 00:13:53.492 "traddr": "10.0.0.2", 00:13:53.492 "adrfam": "ipv4", 00:13:53.492 "trsvcid": "4420", 00:13:53.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:53.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:53.492 "hdgst": false, 00:13:53.492 "ddgst": false 00:13:53.492 }, 00:13:53.492 "method": "bdev_nvme_attach_controller" 00:13:53.492 }' 00:13:53.492 [2024-06-10 12:17:58.991576] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:53.492 [2024-06-10 12:17:58.991624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573560 ] 00:13:53.492 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.492 [2024-06-10 12:17:59.058402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.753 [2024-06-10 12:17:59.123332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.013 Running I/O for 10 seconds... 
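gen_nvmf_target_json renders the heredoc above into the bdev_nvme_attach_controller parameters printed just before launch, and the script hands that JSON to bdevperf over --json /dev/fd/63 via process substitution. A sketch of the invocation, assuming the helper's full output wraps those params in a bdev subsystem config that this excerpt does not show:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10    # 64-deep queue, 64 KiB I/Os, verify workload, 10 s

Since the verify workload reads back and checks what it wrote, the num_read_ops counter polled by waitforio below is a usable liveness signal for the attached Nvme0n1 bdev.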
00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.276 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.276 [2024-06-10 12:17:59.831501] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.276 [2024-06-10 12:17:59.831549] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.276 [2024-06-10 12:17:59.831557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be 
set [... the identical tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* message "The recv state of tqpair=0x13be8a0 is same with the state(5) to be set" repeats verbatim for dozens of consecutive timestamps, 12:17:59.831563 through 12:17:59.831831, elided here ...] 00:13:54.277 [2024-06-10 12:17:59.831838] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831851] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831885] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831891] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 [2024-06-10 12:17:59.831941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be8a0 is same with the state(5) to be set 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.277 12:17:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:54.277 [2024-06-10 12:17:59.850807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
00:13:54.277 [2024-06-10 12:17:59.850807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:13:54.277 [2024-06-10 12:17:59.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the three remaining admin ASYNC EVENT REQUESTs, cid 1-3, are aborted the same way ...)
00:13:54.277 [2024-06-10 12:17:59.850908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f7130 is same with the state(5) to be set
00:13:54.277 [2024-06-10 12:17:59.850988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:54.277 [2024-06-10 12:17:59.850998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... all 64 outstanding I/O commands - READ cid 62-63 and WRITE cid 0-61, lba 81664 through 89728, len 128 - are aborted with SQ DELETION in the same pattern ...)
00:13:54.279 [2024-06-10 12:17:59.852130] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2808280 was disconnected and freed. reset controller.
00:13:54.279 [2024-06-10 12:17:59.853311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:54.279 task offset: 81664 on job bdev=Nvme0n1 fails
00:13:54.279
00:13:54.279 Latency(us)
00:13:54.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:54.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:54.279 Job: Nvme0n1 ended in about 0.43 seconds with error
00:13:54.279 Verification LBA range: start 0x0 length 0x400
00:13:54.279 Nvme0n1 : 0.43 1499.91 93.74 150.46 0.00 37632.60 1652.05 32768.00
00:13:54.279 ===================================================================================================================
00:13:54.279 Total : 1499.91 93.74 150.46 0.00 37632.60 1652.05 32768.00
00:13:54.279 [2024-06-10 12:17:59.855302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:54.279 [2024-06-10 12:17:59.855322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f7130 (9): Bad file descriptor
00:13:54.541 [2024-06-10 12:17:59.916730] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 573560
00:13:55.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (573560) - No such process
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:55.485 {
00:13:55.485   "params": {
00:13:55.485     "name": "Nvme$subsystem",
00:13:55.485     "trtype": "$TEST_TRANSPORT",
00:13:55.485     "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:55.485     "adrfam": "ipv4",
00:13:55.485     "trsvcid": "$NVMF_PORT",
00:13:55.485     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:55.485     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:55.485     "hdgst": ${hdgst:-false},
00:13:55.485     "ddgst": ${ddgst:-false}
00:13:55.485   },
00:13:55.485   "method": "bdev_nvme_attach_controller"
00:13:55.485 }
00:13:55.485 EOF
00:13:55.485 )")
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
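Aside: the MiB/s column in these bdevperf summaries is derived from the IOPS column and the 65536-byte I/O size passed as -o 65536. A minimal shell check, with awk used only for the float math and the 1499.91 figure taken from the failed-run summary above:

  # MiB/s = IOPS * io_size / 2^20; with 64 KiB I/Os that is IOPS / 16
  awk 'BEGIN { printf "%.2f\n", 1499.91 * 65536 / 1048576 }'   # -> 93.74, matching the Nvme0n1 row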
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:13:55.485 12:18:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:55.485   "params": {
00:13:55.485     "name": "Nvme0",
00:13:55.485     "trtype": "tcp",
00:13:55.485     "traddr": "10.0.0.2",
00:13:55.485     "adrfam": "ipv4",
00:13:55.485     "trsvcid": "4420",
00:13:55.485     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:55.485     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:55.485     "hdgst": false,
00:13:55.485     "ddgst": false
00:13:55.485   },
00:13:55.485   "method": "bdev_nvme_attach_controller"
00:13:55.485 }'
00:13:55.485 [2024-06-10 12:18:00.914429] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:13:55.485 [2024-06-10 12:18:00.914487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574040 ]
00:13:55.485 EAL: No free 2048 kB hugepages reported on node 1
00:13:55.485 [2024-06-10 12:18:00.979237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:55.485 [2024-06-10 12:18:01.043722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:13:55.745 Running I/O for 1 seconds...
00:13:56.686
00:13:56.686 Latency(us)
00:13:56.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:56.686 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:56.686 Verification LBA range: start 0x0 length 0x400
00:13:56.686 Nvme0n1 : 1.02 1754.86 109.68 0.00 0.00 35808.83 5952.85 31894.19
00:13:56.686 ===================================================================================================================
00:13:56.686 Total : 1754.86 109.68 0.00 0.00 35808.83 5952.85 31894.19
00:13:56.686
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- #
'[' -n 573498 ']' 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 573498 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 573498 ']' 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 573498 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 573498 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 573498' 00:13:56.947 killing process with pid 573498 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 573498 00:13:56.947 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 573498 00:13:57.209 [2024-06-10 12:18:02.628295] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.209 12:18:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.123 12:18:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.123 12:18:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:59.123 00:13:59.123 real 0m15.099s 00:13:59.123 user 0m22.666s 00:13:59.123 sys 0m6.983s 00:13:59.123 12:18:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:59.123 12:18:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:59.123 ************************************ 00:13:59.123 END TEST nvmf_host_management 00:13:59.123 ************************************ 00:13:59.385 12:18:04 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:59.385 12:18:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:59.385 12:18:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:59.385 12:18:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.385 ************************************ 00:13:59.385 START TEST nvmf_lvol 00:13:59.385 ************************************ 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:59.385 * Looking for test storage... 00:13:59.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:59.385 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.386 12:18:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.531 12:18:12 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:07.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:07.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:07.531 Found net devices under 0000:31:00.0: cvl_0_0 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
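Note: the discovery loop traced here maps each E810 PCI function to its kernel net device through sysfs; a standalone sketch of the same lookup, with the PCI address taken from this log and a standard sysfs layout assumed:

  pci=0000:31:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # prints the attached netdev name(s), e.g. cvl_0_0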
00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:07.531 Found net devices under 0000:31:00.1: cvl_0_1 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.531 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.532 12:18:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:14:07.532 00:14:07.532 --- 10.0.0.2 ping statistics --- 00:14:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.532 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:14:07.532 00:14:07.532 --- 10.0.0.1 ping statistics --- 00:14:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.532 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=579604 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 579604 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 579604 ']' 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:07.532 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.532 [2024-06-10 12:18:13.123780] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:14:07.532 [2024-06-10 12:18:13.123847] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.794 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.794 [2024-06-10 12:18:13.202779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.794 [2024-06-10 12:18:13.277809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.794 [2024-06-10 12:18:13.277847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:07.794 [2024-06-10 12:18:13.277855] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.794 [2024-06-10 12:18:13.277861] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.794 [2024-06-10 12:18:13.277867] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.794 [2024-06-10 12:18:13.278003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.794 [2024-06-10 12:18:13.278123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.794 [2024-06-10 12:18:13.278126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.366 12:18:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.628 [2024-06-10 12:18:14.078156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.628 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.890 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:08.890 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.890 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:08.890 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:09.151 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:09.411 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=379e9e18-7432-495f-b99f-ea8ecbaec863 00:14:09.411 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 379e9e18-7432-495f-b99f-ea8ecbaec863 lvol 20 00:14:09.411 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8cb81709-183f-4b98-b609-355fda4a8121 00:14:09.411 12:18:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.670 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8cb81709-183f-4b98-b609-355fda4a8121 00:14:09.930 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
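For reference, the RPCs traced above assemble the whole lvol stack under test; a condensed sketch of the same sequence run by hand (rpc.py path shortened to $rpc for readability; UUIDs are the ones returned in this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                        # Malloc0
  $rpc bdev_malloc_create 64 512                                        # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs                               # -> 379e9e18-7432-495f-b99f-ea8ecbaec863
  $rpc bdev_lvol_create -u 379e9e18-7432-495f-b99f-ea8ecbaec863 lvol 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8cb81709-183f-4b98-b609-355fda4a8121
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420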
00:14:09.930 [2024-06-10 12:18:15.447361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.930 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.189 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=580170 00:14:10.189 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:10.189 12:18:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:10.189 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.130 12:18:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8cb81709-183f-4b98-b609-355fda4a8121 MY_SNAPSHOT 00:14:11.390 12:18:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=01d68506-de8b-42ea-99dc-c02b217a3d88 00:14:11.390 12:18:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8cb81709-183f-4b98-b609-355fda4a8121 30 00:14:11.651 12:18:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 01d68506-de8b-42ea-99dc-c02b217a3d88 MY_CLONE 00:14:11.651 12:18:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0709ecc3-5b54-4696-9123-31bb016a9e06 00:14:11.651 12:18:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0709ecc3-5b54-4696-9123-31bb016a9e06 00:14:12.220 12:18:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 580170 00:14:22.261 Initializing NVMe Controllers 00:14:22.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:22.261 Controller IO queue size 128, less than required. 00:14:22.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:22.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:22.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:22.261 Initialization complete. Launching workers. 
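What this run exercised: while spdk_nvme_perf (pid 580170) drove 4 KiB random writes from cores 3 and 4, the script snapshotted, resized, cloned and inflated the lvol underneath it. A condensed sketch of that sequence, with <...-uuid> placeholders standing in for the run-specific ids printed above; the latency table that follows gives the per-core outcome:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # 10 s of QD128 4 KiB randwrite
    perf_pid=$!
    $SPDK/scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT    # freeze the lvol
    $SPDK/scripts/rpc.py bdev_lvol_resize   <lvol-uuid> 30             # grow it from 20 to 30
    $SPDK/scripts/rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE   # thin clone of the snapshot
    $SPDK/scripts/rpc.py bdev_lvol_inflate  <clone-uuid>               # detach the clone from its parent
    wait $perf_pid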
00:14:22.261 ======================================================== 00:14:22.261 Latency(us) 00:14:22.261 Device Information : IOPS MiB/s Average min max 00:14:22.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12380.10 48.36 10342.28 1571.51 58405.54 00:14:22.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17847.00 69.71 7173.88 648.11 47058.32 00:14:22.261 ======================================================== 00:14:22.261 Total : 30227.10 118.07 8471.56 648.11 58405.54 00:14:22.261 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8cb81709-183f-4b98-b609-355fda4a8121 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 379e9e18-7432-495f-b99f-ea8ecbaec863 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.261 rmmod nvme_tcp 00:14:22.261 rmmod nvme_fabrics 00:14:22.261 rmmod nvme_keyring 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 579604 ']' 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 579604 ']' 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 579604' 00:14:22.261 killing process with pid 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 579604 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.261 12:18:26 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.261 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.262 12:18:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.648 12:18:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.648 00:14:23.648 real 0m24.072s 00:14:23.648 user 1m4.051s 00:14:23.648 sys 0m8.256s 00:14:23.648 12:18:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:23.648 12:18:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:23.648 ************************************ 00:14:23.648 END TEST nvmf_lvol 00:14:23.648 ************************************ 00:14:23.648 12:18:28 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.648 12:18:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:23.648 12:18:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:23.648 12:18:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.648 ************************************ 00:14:23.648 START TEST nvmf_lvs_grow 00:14:23.648 ************************************ 00:14:23.648 12:18:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.648 * Looking for test storage... 
00:14:23.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.648 12:18:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.789 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.789 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.789 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:31.790 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:31.790 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:31.790 Found net devices under 0000:31:00.0: cvl_0_0 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:31.790 Found net devices under 0000:31:00.1: cvl_0_1 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.790 12:18:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:14:31.790 00:14:31.790 --- 10.0.0.2 ping statistics --- 00:14:31.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.790 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:14:31.790 00:14:31.790 --- 10.0.0.1 ping statistics --- 00:14:31.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.790 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=587023 00:14:31.790 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 587023 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 587023 ']' 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:31.791 12:18:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.791 [2024-06-10 12:18:37.283731] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:14:31.791 [2024-06-10 12:18:37.283792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.791 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.791 [2024-06-10 12:18:37.364331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.050 [2024-06-10 12:18:37.438166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.050 [2024-06-10 12:18:37.438208] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
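The two pings above (10.0.0.2 from the host side, 10.0.0.1 from inside the namespace) rest on the plumbing nvmf_tcp_init performed a few lines earlier: one e810 port (cvl_0_0) is moved into a private namespace as the target side while its sibling (cvl_0_1) stays in the host namespace as the initiator side. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic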
00:14:32.050 [2024-06-10 12:18:37.438217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.050 [2024-06-10 12:18:37.438224] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.050 [2024-06-10 12:18:37.438230] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.050 [2024-06-10 12:18:37.438255] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.620 12:18:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.880 [2024-06-10 12:18:38.241519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:32.880 ************************************ 00:14:32.880 START TEST lvs_grow_clean 00:14:32.880 ************************************ 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.880 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.141 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:33.141 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:33.141 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ac9c019f-1cd6-49df-be16-7c615da573de 00:14:33.141 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:33.141 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:33.401 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:33.401 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:33.401 12:18:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac9c019f-1cd6-49df-be16-7c615da573de lvol 150 00:14:33.662 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2830ec27-9fe9-467b-894e-6f13818a520f 00:14:33.662 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:33.662 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:33.662 [2024-06-10 12:18:39.149295] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:33.662 [2024-06-10 12:18:39.149350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:33.662 true 00:14:33.662 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:33.662 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:33.922 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:33.922 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:33.922 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2830ec27-9fe9-467b-894e-6f13818a520f 00:14:34.182 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:34.182 [2024-06-10 12:18:39.735081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.182 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.442 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=587584 00:14:34.442 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 587584 /var/tmp/bdevperf.sock 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 587584 ']' 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:34.443 12:18:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:34.443 [2024-06-10 12:18:39.960509] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
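The bdevperf instance starting here is launched idle (-z) against a private RPC socket; the exported lvol is attached over NVMe/TCP first, and only then is the queued workload released. The shape of that flow, with bdevperf, rpc.py and bdevperf.py abbreviating the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths used in this log:

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the exported lvol; it shows up as bdev Nvme0n1 inside bdevperf
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # release the run (-S 1 above asks for interim results every second)
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests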
00:14:34.443 [2024-06-10 12:18:39.960574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587584 ] 00:14:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.443 [2024-06-10 12:18:40.044396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.703 [2024-06-10 12:18:40.108521] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.273 12:18:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:35.273 12:18:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:14:35.273 12:18:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:35.533 Nvme0n1 00:14:35.533 12:18:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:35.794 [ 00:14:35.794 { 00:14:35.794 "name": "Nvme0n1", 00:14:35.794 "aliases": [ 00:14:35.794 "2830ec27-9fe9-467b-894e-6f13818a520f" 00:14:35.794 ], 00:14:35.794 "product_name": "NVMe disk", 00:14:35.794 "block_size": 4096, 00:14:35.794 "num_blocks": 38912, 00:14:35.794 "uuid": "2830ec27-9fe9-467b-894e-6f13818a520f", 00:14:35.794 "assigned_rate_limits": { 00:14:35.794 "rw_ios_per_sec": 0, 00:14:35.794 "rw_mbytes_per_sec": 0, 00:14:35.794 "r_mbytes_per_sec": 0, 00:14:35.794 "w_mbytes_per_sec": 0 00:14:35.794 }, 00:14:35.794 "claimed": false, 00:14:35.794 "zoned": false, 00:14:35.794 "supported_io_types": { 00:14:35.794 "read": true, 00:14:35.794 "write": true, 00:14:35.794 "unmap": true, 00:14:35.794 "write_zeroes": true, 00:14:35.794 "flush": true, 00:14:35.794 "reset": true, 00:14:35.794 "compare": true, 00:14:35.794 "compare_and_write": true, 00:14:35.794 "abort": true, 00:14:35.794 "nvme_admin": true, 00:14:35.794 "nvme_io": true 00:14:35.794 }, 00:14:35.794 "memory_domains": [ 00:14:35.794 { 00:14:35.794 "dma_device_id": "system", 00:14:35.794 "dma_device_type": 1 00:14:35.794 } 00:14:35.794 ], 00:14:35.794 "driver_specific": { 00:14:35.794 "nvme": [ 00:14:35.794 { 00:14:35.794 "trid": { 00:14:35.794 "trtype": "TCP", 00:14:35.794 "adrfam": "IPv4", 00:14:35.794 "traddr": "10.0.0.2", 00:14:35.794 "trsvcid": "4420", 00:14:35.794 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:35.794 }, 00:14:35.794 "ctrlr_data": { 00:14:35.794 "cntlid": 1, 00:14:35.794 "vendor_id": "0x8086", 00:14:35.794 "model_number": "SPDK bdev Controller", 00:14:35.794 "serial_number": "SPDK0", 00:14:35.794 "firmware_revision": "24.09", 00:14:35.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.794 "oacs": { 00:14:35.794 "security": 0, 00:14:35.794 "format": 0, 00:14:35.794 "firmware": 0, 00:14:35.794 "ns_manage": 0 00:14:35.794 }, 00:14:35.794 "multi_ctrlr": true, 00:14:35.794 "ana_reporting": false 00:14:35.794 }, 00:14:35.794 "vs": { 00:14:35.794 "nvme_version": "1.3" 00:14:35.794 }, 00:14:35.794 "ns_data": { 00:14:35.794 "id": 1, 00:14:35.794 "can_share": true 00:14:35.794 } 00:14:35.794 } 00:14:35.794 ], 00:14:35.794 "mp_policy": "active_passive" 00:14:35.794 } 00:14:35.794 } 00:14:35.794 ] 00:14:35.794 12:18:41 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=587916 00:14:35.794 12:18:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:35.794 12:18:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:35.794 Running I/O for 10 seconds... 00:14:37.176 Latency(us) 00:14:37.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.176 Nvme0n1 : 1.00 18050.00 70.51 0.00 0.00 0.00 0.00 0.00 00:14:37.176 =================================================================================================================== 00:14:37.176 Total : 18050.00 70.51 0.00 0.00 0.00 0.00 0.00 00:14:37.176 00:14:37.746 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:38.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.006 Nvme0n1 : 2.00 18175.50 71.00 0.00 0.00 0.00 0.00 0.00 00:14:38.006 =================================================================================================================== 00:14:38.006 Total : 18175.50 71.00 0.00 0.00 0.00 0.00 0.00 00:14:38.006 00:14:38.007 true 00:14:38.007 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:38.007 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:38.267 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:38.267 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:38.267 12:18:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 587916 00:14:38.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.837 Nvme0n1 : 3.00 18218.33 71.17 0.00 0.00 0.00 0.00 0.00 00:14:38.837 =================================================================================================================== 00:14:38.837 Total : 18218.33 71.17 0.00 0.00 0.00 0.00 0.00 00:14:38.837 00:14:39.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.780 Nvme0n1 : 4.00 18255.00 71.31 0.00 0.00 0.00 0.00 0.00 00:14:39.780 =================================================================================================================== 00:14:39.780 Total : 18255.00 71.31 0.00 0.00 0.00 0.00 0.00 00:14:39.780 00:14:41.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.164 Nvme0n1 : 5.00 18276.20 71.39 0.00 0.00 0.00 0.00 0.00 00:14:41.164 =================================================================================================================== 00:14:41.164 Total : 18276.20 71.39 0.00 0.00 0.00 0.00 0.00 00:14:41.164 00:14:42.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.106 Nvme0n1 : 6.00 18301.33 71.49 0.00 0.00 0.00 0.00 0.00 00:14:42.106 
=================================================================================================================== 00:14:42.106 Total : 18301.33 71.49 0.00 0.00 0.00 0.00 0.00 00:14:42.106 00:14:43.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.069 Nvme0n1 : 7.00 18310.43 71.53 0.00 0.00 0.00 0.00 0.00 00:14:43.069 =================================================================================================================== 00:14:43.069 Total : 18310.43 71.53 0.00 0.00 0.00 0.00 0.00 00:14:43.069 00:14:44.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.009 Nvme0n1 : 8.00 18325.00 71.58 0.00 0.00 0.00 0.00 0.00 00:14:44.009 =================================================================================================================== 00:14:44.009 Total : 18325.00 71.58 0.00 0.00 0.00 0.00 0.00 00:14:44.009 00:14:44.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.951 Nvme0n1 : 9.00 18336.22 71.63 0.00 0.00 0.00 0.00 0.00 00:14:44.951 =================================================================================================================== 00:14:44.951 Total : 18336.22 71.63 0.00 0.00 0.00 0.00 0.00 00:14:44.951 00:14:45.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.955 Nvme0n1 : 10.00 18345.00 71.66 0.00 0.00 0.00 0.00 0.00 00:14:45.955 =================================================================================================================== 00:14:45.955 Total : 18345.00 71.66 0.00 0.00 0.00 0.00 0.00 00:14:45.955 00:14:45.955 00:14:45.955 Latency(us) 00:14:45.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.955 Nvme0n1 : 10.00 18344.10 71.66 0.00 0.00 6974.28 4287.15 12506.45 00:14:45.955 =================================================================================================================== 00:14:45.955 Total : 18344.10 71.66 0.00 0.00 6974.28 4287.15 12506.45 00:14:45.955 0 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 587584 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 587584 ']' 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 587584 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 587584 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 587584' 00:14:45.955 killing process with pid 587584 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 587584 00:14:45.955 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.955 00:14:45.955 Latency(us) 00:14:45.955 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:14:45.955 =================================================================================================================== 00:14:45.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.955 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 587584 00:14:46.215 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.215 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.474 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:46.474 12:18:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:46.474 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:46.474 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:46.474 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.733 [2024-06-10 12:18:52.209729] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:46.733 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:46.733 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:46.734 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:46.994 request: 00:14:46.994 { 00:14:46.994 "uuid": "ac9c019f-1cd6-49df-be16-7c615da573de", 00:14:46.994 "method": "bdev_lvol_get_lvstores", 00:14:46.994 "req_id": 1 00:14:46.994 } 00:14:46.994 Got JSON-RPC error response 00:14:46.994 response: 00:14:46.994 { 00:14:46.994 "code": -19, 00:14:46.994 "message": "No such device" 00:14:46.994 } 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.994 aio_bdev 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2830ec27-9fe9-467b-894e-6f13818a520f 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=2830ec27-9fe9-467b-894e-6f13818a520f 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:46.994 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:47.254 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2830ec27-9fe9-467b-894e-6f13818a520f -t 2000 00:14:47.254 [ 00:14:47.254 { 00:14:47.254 "name": "2830ec27-9fe9-467b-894e-6f13818a520f", 00:14:47.254 "aliases": [ 00:14:47.254 "lvs/lvol" 00:14:47.254 ], 00:14:47.254 "product_name": "Logical Volume", 00:14:47.254 "block_size": 4096, 00:14:47.254 "num_blocks": 38912, 00:14:47.254 "uuid": "2830ec27-9fe9-467b-894e-6f13818a520f", 00:14:47.254 "assigned_rate_limits": { 00:14:47.254 "rw_ios_per_sec": 0, 00:14:47.254 "rw_mbytes_per_sec": 0, 00:14:47.254 "r_mbytes_per_sec": 0, 00:14:47.254 "w_mbytes_per_sec": 0 00:14:47.254 }, 00:14:47.254 "claimed": false, 00:14:47.254 "zoned": false, 00:14:47.254 "supported_io_types": { 00:14:47.254 "read": true, 00:14:47.254 "write": true, 00:14:47.254 "unmap": true, 00:14:47.254 "write_zeroes": true, 00:14:47.254 "flush": false, 00:14:47.254 "reset": true, 00:14:47.254 "compare": false, 00:14:47.254 "compare_and_write": false, 00:14:47.254 "abort": false, 00:14:47.254 "nvme_admin": false, 00:14:47.254 "nvme_io": false 00:14:47.254 }, 00:14:47.254 "driver_specific": { 00:14:47.254 "lvol": { 00:14:47.254 "lvol_store_uuid": "ac9c019f-1cd6-49df-be16-7c615da573de", 00:14:47.254 "base_bdev": "aio_bdev", 
00:14:47.254 "thin_provision": false, 00:14:47.254 "num_allocated_clusters": 38, 00:14:47.254 "snapshot": false, 00:14:47.255 "clone": false, 00:14:47.255 "esnap_clone": false 00:14:47.255 } 00:14:47.255 } 00:14:47.255 } 00:14:47.255 ] 00:14:47.255 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:14:47.255 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:47.255 12:18:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:47.515 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:47.515 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:47.515 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:47.776 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:47.776 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2830ec27-9fe9-467b-894e-6f13818a520f 00:14:47.776 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac9c019f-1cd6-49df-be16-7c615da573de 00:14:48.036 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.036 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.036 00:14:48.036 real 0m15.332s 00:14:48.036 user 0m15.085s 00:14:48.036 sys 0m1.266s 00:14:48.036 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:48.036 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:48.036 ************************************ 00:14:48.036 END TEST lvs_grow_clean 00:14:48.036 ************************************ 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.296 ************************************ 00:14:48.296 START TEST lvs_grow_dirty 00:14:48.296 ************************************ 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.296 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.556 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:48.556 12:18:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:48.556 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=446ff718-8db6-4c90-bf2f-aedd9ce72260 00:14:48.556 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:14:48.556 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 lvol 150 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=98b5ffa4-189a-4568-a34d-869233fa702a 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.817 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:49.078 [2024-06-10 12:18:54.512773] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:49.078 [2024-06-10 12:18:54.512828] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:49.078 true 00:14:49.078 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:49.078 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
446ff718-8db6-4c90-bf2f-aedd9ce72260 00:14:49.078 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:49.078 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:49.339 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98b5ffa4-189a-4568-a34d-869233fa702a 00:14:49.598 12:18:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.598 [2024-06-10 12:18:55.098581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.598 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=590664 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 590664 /var/tmp/bdevperf.sock 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 590664 ']' 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:49.859 12:18:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:49.859 [2024-06-10 12:18:55.311579] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
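The dirty variant drives I/O from a second SPDK process: bdevperf runs idle (-z) on its own RPC socket, /var/tmp/bdevperf.sock, while the target keeps /var/tmp/spdk.sock. A minimal sketch of that wiring, reusing only flags that appear in this log ($rootdir is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix spelled out above):

    # start bdevperf idle on a private RPC socket: 4 KiB random writes,
    # queue depth 128, 10 s run, periodic stats (-S 1), wait for tests (-z)
    $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock

    # attach the namespace exported by the target as local bdev Nvme0n1
    $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The bdev_get_bdevs dump that follows confirms the attach: a 38912-block, 4096-byte-block NVMe disk whose UUID matches the lvol created above.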
00:14:49.859 [2024-06-10 12:18:55.311629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590664 ] 00:14:49.859 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.859 [2024-06-10 12:18:55.392618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.859 [2024-06-10 12:18:55.445971] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.801 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:50.801 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:50.801 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:50.801 Nvme0n1 00:14:50.801 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:51.062 [ 00:14:51.062 { 00:14:51.062 "name": "Nvme0n1", 00:14:51.062 "aliases": [ 00:14:51.062 "98b5ffa4-189a-4568-a34d-869233fa702a" 00:14:51.062 ], 00:14:51.062 "product_name": "NVMe disk", 00:14:51.062 "block_size": 4096, 00:14:51.062 "num_blocks": 38912, 00:14:51.062 "uuid": "98b5ffa4-189a-4568-a34d-869233fa702a", 00:14:51.062 "assigned_rate_limits": { 00:14:51.062 "rw_ios_per_sec": 0, 00:14:51.062 "rw_mbytes_per_sec": 0, 00:14:51.062 "r_mbytes_per_sec": 0, 00:14:51.062 "w_mbytes_per_sec": 0 00:14:51.062 }, 00:14:51.062 "claimed": false, 00:14:51.062 "zoned": false, 00:14:51.062 "supported_io_types": { 00:14:51.062 "read": true, 00:14:51.062 "write": true, 00:14:51.062 "unmap": true, 00:14:51.062 "write_zeroes": true, 00:14:51.062 "flush": true, 00:14:51.062 "reset": true, 00:14:51.062 "compare": true, 00:14:51.062 "compare_and_write": true, 00:14:51.062 "abort": true, 00:14:51.062 "nvme_admin": true, 00:14:51.062 "nvme_io": true 00:14:51.062 }, 00:14:51.062 "memory_domains": [ 00:14:51.062 { 00:14:51.062 "dma_device_id": "system", 00:14:51.062 "dma_device_type": 1 00:14:51.063 } 00:14:51.063 ], 00:14:51.063 "driver_specific": { 00:14:51.063 "nvme": [ 00:14:51.063 { 00:14:51.063 "trid": { 00:14:51.063 "trtype": "TCP", 00:14:51.063 "adrfam": "IPv4", 00:14:51.063 "traddr": "10.0.0.2", 00:14:51.063 "trsvcid": "4420", 00:14:51.063 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:51.063 }, 00:14:51.063 "ctrlr_data": { 00:14:51.063 "cntlid": 1, 00:14:51.063 "vendor_id": "0x8086", 00:14:51.063 "model_number": "SPDK bdev Controller", 00:14:51.063 "serial_number": "SPDK0", 00:14:51.063 "firmware_revision": "24.09", 00:14:51.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.063 "oacs": { 00:14:51.063 "security": 0, 00:14:51.063 "format": 0, 00:14:51.063 "firmware": 0, 00:14:51.063 "ns_manage": 0 00:14:51.063 }, 00:14:51.063 "multi_ctrlr": true, 00:14:51.063 "ana_reporting": false 00:14:51.063 }, 00:14:51.063 "vs": { 00:14:51.063 "nvme_version": "1.3" 00:14:51.063 }, 00:14:51.063 "ns_data": { 00:14:51.063 "id": 1, 00:14:51.063 "can_share": true 00:14:51.063 } 00:14:51.063 } 00:14:51.063 ], 00:14:51.063 "mp_policy": "active_passive" 00:14:51.063 } 00:14:51.063 } 00:14:51.063 ] 00:14:51.063 12:18:56 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=590980 00:14:51.063 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:51.063 12:18:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.063 Running I/O for 10 seconds... 00:14:52.004 Latency(us) 00:14:52.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.004 Nvme0n1 : 1.00 18130.00 70.82 0.00 0.00 0.00 0.00 0.00 00:14:52.004 =================================================================================================================== 00:14:52.004 Total : 18130.00 70.82 0.00 0.00 0.00 0.00 0.00 00:14:52.004 00:14:52.947 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:14:52.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.947 Nvme0n1 : 2.00 18178.50 71.01 0.00 0.00 0.00 0.00 0.00 00:14:52.947 =================================================================================================================== 00:14:52.947 Total : 18178.50 71.01 0.00 0.00 0.00 0.00 0.00 00:14:52.947 00:14:53.207 true 00:14:53.207 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:14:53.207 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:53.208 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:53.208 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:53.208 12:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 590980 00:14:54.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.148 Nvme0n1 : 3.00 18217.67 71.16 0.00 0.00 0.00 0.00 0.00 00:14:54.148 =================================================================================================================== 00:14:54.148 Total : 18217.67 71.16 0.00 0.00 0.00 0.00 0.00 00:14:54.148 00:14:55.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.091 Nvme0n1 : 4.00 18236.50 71.24 0.00 0.00 0.00 0.00 0.00 00:14:55.091 =================================================================================================================== 00:14:55.091 Total : 18236.50 71.24 0.00 0.00 0.00 0.00 0.00 00:14:55.091 00:14:56.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.032 Nvme0n1 : 5.00 18261.60 71.33 0.00 0.00 0.00 0.00 0.00 00:14:56.032 =================================================================================================================== 00:14:56.032 Total : 18261.60 71.33 0.00 0.00 0.00 0.00 0.00 00:14:56.032 00:14:56.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.974 Nvme0n1 : 6.00 18283.33 71.42 0.00 0.00 0.00 0.00 0.00 00:14:56.974 
=================================================================================================================== 00:14:56.974 Total : 18283.33 71.42 0.00 0.00 0.00 0.00 0.00 00:14:56.974 00:14:58.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.357 Nvme0n1 : 7.00 18295.00 71.46 0.00 0.00 0.00 0.00 0.00 00:14:58.357 =================================================================================================================== 00:14:58.357 Total : 18295.00 71.46 0.00 0.00 0.00 0.00 0.00 00:14:58.357 00:14:59.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.299 Nvme0n1 : 8.00 18311.50 71.53 0.00 0.00 0.00 0.00 0.00 00:14:59.299 =================================================================================================================== 00:14:59.299 Total : 18311.50 71.53 0.00 0.00 0.00 0.00 0.00 00:14:59.299 00:15:00.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.239 Nvme0n1 : 9.00 18316.56 71.55 0.00 0.00 0.00 0.00 0.00 00:15:00.239 =================================================================================================================== 00:15:00.239 Total : 18316.56 71.55 0.00 0.00 0.00 0.00 0.00 00:15:00.239 00:15:01.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.274 Nvme0n1 : 10.00 18327.70 71.59 0.00 0.00 0.00 0.00 0.00 00:15:01.274 =================================================================================================================== 00:15:01.274 Total : 18327.70 71.59 0.00 0.00 0.00 0.00 0.00 00:15:01.274 00:15:01.274 00:15:01.274 Latency(us) 00:15:01.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.274 Nvme0n1 : 10.01 18330.02 71.60 0.00 0.00 6980.09 1583.79 12561.07 00:15:01.274 =================================================================================================================== 00:15:01.274 Total : 18330.02 71.60 0.00 0.00 6980.09 1583.79 12561.07 00:15:01.274 0 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 590664 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 590664 ']' 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 590664 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 590664 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 590664' 00:15:01.274 killing process with pid 590664 00:15:01.274 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 590664 00:15:01.274 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.274 00:15:01.275 Latency(us) 00:15:01.275 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:15:01.275 =================================================================================================================== 00:15:01.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.275 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 590664 00:15:01.275 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.534 12:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.534 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:01.534 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 587023 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 587023 00:15:01.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 587023 Killed "${NVMF_APP[@]}" "$@" 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=593023 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 593023 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 593023 ']' 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
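This is the step that makes the dirty variant dirty: the target that owns the lvstore (pid 587023) is killed with SIGKILL while the lvstore is still open, so no metadata is flushed and the blobstore is never closed. A replacement target is then started and, in the lines that follow, re-creates the AIO bdev so the lvstore is loaded from disk and replayed. Condensed, with $nvmfpid and $rootdir standing in for the pid and path spelled out in this log:

    # crash the target mid-flight; line 75 of the script reports 'Killed'
    kill -9 $nvmfpid
    wait $nvmfpid || true

    # bring up a replacement target and hand it the same 400 MiB backing
    # file; examining it triggers 'Performing recovery on blobstore' below
    nvmfappstart -m 0x1
    $rootdir/scripts/rpc.py bdev_aio_create \
        $rootdir/test/nvmf/target/aio_bdev aio_bdev 4096

The blobstore recovery notices just below are that replay in action.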
00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:01.795 12:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 [2024-06-10 12:19:07.336801] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:01.795 [2024-06-10 12:19:07.336857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.795 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.055 [2024-06-10 12:19:07.410077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.055 [2024-06-10 12:19:07.476347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.055 [2024-06-10 12:19:07.476382] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.055 [2024-06-10 12:19:07.476389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.055 [2024-06-10 12:19:07.476395] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.055 [2024-06-10 12:19:07.476401] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.055 [2024-06-10 12:19:07.476419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.627 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.888 [2024-06-10 12:19:08.264626] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:02.888 [2024-06-10 12:19:08.264714] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:02.888 [2024-06-10 12:19:08.264743] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 98b5ffa4-189a-4568-a34d-869233fa702a 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=98b5ffa4-189a-4568-a34d-869233fa702a 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:02.888 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:02.889 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:02.889 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98b5ffa4-189a-4568-a34d-869233fa702a -t 2000 00:15:03.149 [ 00:15:03.149 { 00:15:03.149 "name": "98b5ffa4-189a-4568-a34d-869233fa702a", 00:15:03.149 "aliases": [ 00:15:03.149 "lvs/lvol" 00:15:03.149 ], 00:15:03.149 "product_name": "Logical Volume", 00:15:03.149 "block_size": 4096, 00:15:03.149 "num_blocks": 38912, 00:15:03.149 "uuid": "98b5ffa4-189a-4568-a34d-869233fa702a", 00:15:03.149 "assigned_rate_limits": { 00:15:03.149 "rw_ios_per_sec": 0, 00:15:03.149 "rw_mbytes_per_sec": 0, 00:15:03.149 "r_mbytes_per_sec": 0, 00:15:03.149 "w_mbytes_per_sec": 0 00:15:03.149 }, 00:15:03.149 "claimed": false, 00:15:03.149 "zoned": false, 00:15:03.149 "supported_io_types": { 00:15:03.149 "read": true, 00:15:03.149 "write": true, 00:15:03.149 "unmap": true, 00:15:03.149 "write_zeroes": true, 00:15:03.149 "flush": false, 00:15:03.149 "reset": true, 00:15:03.149 "compare": false, 00:15:03.149 "compare_and_write": false, 00:15:03.149 "abort": false, 00:15:03.149 "nvme_admin": false, 00:15:03.149 "nvme_io": false 00:15:03.149 }, 00:15:03.149 "driver_specific": { 00:15:03.149 "lvol": { 00:15:03.149 "lvol_store_uuid": "446ff718-8db6-4c90-bf2f-aedd9ce72260", 00:15:03.149 "base_bdev": "aio_bdev", 00:15:03.149 "thin_provision": false, 00:15:03.149 "num_allocated_clusters": 38, 00:15:03.149 "snapshot": false, 00:15:03.149 "clone": false, 00:15:03.149 "esnap_clone": false 00:15:03.149 } 00:15:03.149 } 00:15:03.149 } 00:15:03.149 ] 00:15:03.149 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:03.149 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:03.149 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:03.149 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:03.149 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:03.150 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:03.409 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:03.409 12:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.670 [2024-06-10 12:19:09.028586] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:03.670 request: 00:15:03.670 { 00:15:03.670 "uuid": "446ff718-8db6-4c90-bf2f-aedd9ce72260", 00:15:03.670 "method": "bdev_lvol_get_lvstores", 00:15:03.670 "req_id": 1 00:15:03.670 } 00:15:03.670 Got JSON-RPC error response 00:15:03.670 response: 00:15:03.670 { 00:15:03.670 "code": -19, 00:15:03.670 "message": "No such device" 00:15:03.670 } 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:03.670 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.977 aio_bdev 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 98b5ffa4-189a-4568-a34d-869233fa702a 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=98b5ffa4-189a-4568-a34d-869233fa702a 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
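The -19 response above is asserted, not merely tolerated: deleting the base AIO bdev hot-removed the lvstore, and the script wraps the follow-up query in the autotest NOT helper, which inverts an exit status, so the test fails unless bdev_lvol_get_lvstores errors out. Re-creating the file-backed bdev then brings the lvstore and its lvol back. Condensed from the @84-@87 markers above, with $lvs and $lvol holding the two UUIDs printed in this log:

    $rootdir/scripts/rpc.py bdev_aio_delete aio_bdev       # lvstore hot-removed
    NOT $rootdir/scripts/rpc.py \
        bdev_lvol_get_lvstores -u $lvs                     # must fail with -19
    $rootdir/scripts/rpc.py bdev_aio_create \
        $rootdir/test/nvmf/target/aio_bdev aio_bdev 4096   # reload from disk
    waitforbdev $lvol                                      # lvol reappears

The bdev_get_bdevs dump that follows shows the identical lvol with its 38 allocated clusters intact, confirming the metadata survived both the SIGKILL and the hot-remove.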
00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.977 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98b5ffa4-189a-4568-a34d-869233fa702a -t 2000 00:15:04.260 [ 00:15:04.260 { 00:15:04.260 "name": "98b5ffa4-189a-4568-a34d-869233fa702a", 00:15:04.260 "aliases": [ 00:15:04.260 "lvs/lvol" 00:15:04.260 ], 00:15:04.260 "product_name": "Logical Volume", 00:15:04.260 "block_size": 4096, 00:15:04.260 "num_blocks": 38912, 00:15:04.260 "uuid": "98b5ffa4-189a-4568-a34d-869233fa702a", 00:15:04.260 "assigned_rate_limits": { 00:15:04.260 "rw_ios_per_sec": 0, 00:15:04.260 "rw_mbytes_per_sec": 0, 00:15:04.260 "r_mbytes_per_sec": 0, 00:15:04.260 "w_mbytes_per_sec": 0 00:15:04.260 }, 00:15:04.260 "claimed": false, 00:15:04.260 "zoned": false, 00:15:04.260 "supported_io_types": { 00:15:04.260 "read": true, 00:15:04.260 "write": true, 00:15:04.260 "unmap": true, 00:15:04.260 "write_zeroes": true, 00:15:04.260 "flush": false, 00:15:04.260 "reset": true, 00:15:04.260 "compare": false, 00:15:04.260 "compare_and_write": false, 00:15:04.260 "abort": false, 00:15:04.260 "nvme_admin": false, 00:15:04.260 "nvme_io": false 00:15:04.260 }, 00:15:04.260 "driver_specific": { 00:15:04.260 "lvol": { 00:15:04.260 "lvol_store_uuid": "446ff718-8db6-4c90-bf2f-aedd9ce72260", 00:15:04.260 "base_bdev": "aio_bdev", 00:15:04.260 "thin_provision": false, 00:15:04.260 "num_allocated_clusters": 38, 00:15:04.260 "snapshot": false, 00:15:04.260 "clone": false, 00:15:04.260 "esnap_clone": false 00:15:04.260 } 00:15:04.260 } 00:15:04.260 } 00:15:04.260 ] 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:04.260 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:04.521 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:04.521 12:19:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98b5ffa4-189a-4568-a34d-869233fa702a 00:15:04.521 12:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 446ff718-8db6-4c90-bf2f-aedd9ce72260 00:15:04.783 12:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:05.044 12:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:05.044 00:15:05.044 real 0m16.755s 00:15:05.044 user 0m44.183s 00:15:05.044 sys 0m2.791s 00:15:05.044 12:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:05.044 12:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:05.044 ************************************ 00:15:05.044 END TEST lvs_grow_dirty 00:15:05.045 ************************************ 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:05.045 nvmf_trace.0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.045 rmmod nvme_tcp 00:15:05.045 rmmod nvme_fabrics 00:15:05.045 rmmod nvme_keyring 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 593023 ']' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 593023 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 593023 ']' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 593023 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:05.045 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 593023 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 593023' 00:15:05.305 killing process with pid 593023 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 593023 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 593023 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.305 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.306 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.306 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.306 12:19:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.306 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.306 12:19:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.851 12:19:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.851 00:15:07.851 real 0m43.934s 00:15:07.851 user 1m5.429s 00:15:07.851 sys 0m10.497s 00:15:07.851 12:19:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:07.851 12:19:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:07.851 ************************************ 00:15:07.851 END TEST nvmf_lvs_grow 00:15:07.851 ************************************ 00:15:07.852 12:19:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.852 12:19:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:07.852 12:19:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:07.852 12:19:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.852 ************************************ 00:15:07.852 START TEST nvmf_bdev_io_wait 00:15:07.852 ************************************ 00:15:07.852 12:19:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.852 * Looking for test storage... 
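nvmf.sh chains its suites through the run_test helper, which prints the START/END banners and, judging by this log, the real/user/sys timing triplets seen above, and scopes the xtrace prefix to the test name. The invocation that opens this suite, as echoed at nvmf/nvmf.sh line 50:

    # run_test <name> <script> [args...]; $rootdir abbreviates the
    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix
    run_test nvmf_bdev_io_wait \
        $rootdir/test/nvmf/target/bdev_io_wait.sh --transport=tcp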
00:15:07.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.852 12:19:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.993 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:15.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:15.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:15.994 Found net devices under 0000:31:00.0: cvl_0_0 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:15.994 Found net devices under 0000:31:00.1: cvl_0_1 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.994 12:19:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:15.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:15:15.994 00:15:15.994 --- 10.0.0.2 ping statistics --- 00:15:15.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.994 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:15:15.994 00:15:15.994 --- 10.0.0.1 ping statistics --- 00:15:15.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.994 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.994 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=598433 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 598433 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 598433 ']' 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 [2024-06-10 12:19:21.300958] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
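The target/initiator loopback the trace just verified is plain iproute2 plumbing: one port of the E810 pair is moved into a private network namespace and addressed as the target, while its sibling stays in the root namespace as the initiator. A condensed, hand-runnable sketch of that sequence (interface names, addresses, and flags are taken verbatim from the trace; root privileges assumed):

  # cvl_0_0 becomes the target inside the namespace; cvl_0_1 is the initiator.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator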
00:15:15.995 [2024-06-10 12:19:21.301003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.995 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.995 [2024-06-10 12:19:21.375169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.995 [2024-06-10 12:19:21.441347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.995 [2024-06-10 12:19:21.441383] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.995 [2024-06-10 12:19:21.441390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.995 [2024-06-10 12:19:21.441397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.995 [2024-06-10 12:19:21.441402] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.995 [2024-06-10 12:19:21.441539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.995 [2024-06-10 12:19:21.441656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.995 [2024-06-10 12:19:21.441816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.995 [2024-06-10 12:19:21.441816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.995 [2024-06-10 12:19:21.567904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.995 12:19:21 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.995 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:16.257 Malloc0 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:16.257 [2024-06-10 12:19:21.633523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=598462 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=598464 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:16.257 { 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme$subsystem", 00:15:16.257 "trtype": "$TEST_TRANSPORT", 00:15:16.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "$NVMF_PORT", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:16.257 "hdgst": ${hdgst:-false}, 00:15:16.257 "ddgst": ${ddgst:-false} 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 } 00:15:16.257 EOF 00:15:16.257 )") 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=598466 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:16.257 { 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme$subsystem", 00:15:16.257 "trtype": "$TEST_TRANSPORT", 00:15:16.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "$NVMF_PORT", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:16.257 "hdgst": ${hdgst:-false}, 00:15:16.257 "ddgst": ${ddgst:-false} 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 } 00:15:16.257 EOF 00:15:16.257 )") 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=598469 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:16.257 { 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme$subsystem", 00:15:16.257 "trtype": "$TEST_TRANSPORT", 00:15:16.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "$NVMF_PORT", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:16.257 "hdgst": ${hdgst:-false}, 00:15:16.257 "ddgst": ${ddgst:-false} 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 } 00:15:16.257 EOF 00:15:16.257 )") 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:16.257 { 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme$subsystem", 00:15:16.257 "trtype": "$TEST_TRANSPORT", 00:15:16.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "$NVMF_PORT", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:16.257 "hdgst": ${hdgst:-false}, 00:15:16.257 "ddgst": ${ddgst:-false} 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 } 00:15:16.257 EOF 00:15:16.257 )") 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 598462 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme1", 00:15:16.257 "trtype": "tcp", 00:15:16.257 "traddr": "10.0.0.2", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "4420", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.257 "hdgst": false, 00:15:16.257 "ddgst": false 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 }' 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme1", 00:15:16.257 "trtype": "tcp", 00:15:16.257 "traddr": "10.0.0.2", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "4420", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.257 "hdgst": false, 00:15:16.257 "ddgst": false 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 }' 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme1", 00:15:16.257 "trtype": "tcp", 00:15:16.257 "traddr": "10.0.0.2", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "4420", 00:15:16.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.257 "hdgst": false, 00:15:16.257 "ddgst": false 00:15:16.257 }, 00:15:16.257 "method": "bdev_nvme_attach_controller" 00:15:16.257 }' 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:16.257 12:19:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:16.257 "params": { 00:15:16.257 "name": "Nvme1", 00:15:16.257 "trtype": "tcp", 00:15:16.257 "traddr": "10.0.0.2", 00:15:16.257 "adrfam": "ipv4", 00:15:16.257 "trsvcid": "4420", 00:15:16.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.258 "hdgst": false, 00:15:16.258 "ddgst": false 00:15:16.258 }, 00:15:16.258 "method": "bdev_nvme_attach_controller" 
00:15:16.258 }' 00:15:16.258 [2024-06-10 12:19:21.685471] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:16.258 [2024-06-10 12:19:21.685519] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:16.258 [2024-06-10 12:19:21.688865] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:16.258 [2024-06-10 12:19:21.688912] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:16.258 [2024-06-10 12:19:21.690390] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:16.258 [2024-06-10 12:19:21.690436] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:16.258 [2024-06-10 12:19:21.690996] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:16.258 [2024-06-10 12:19:21.691037] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:16.258 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.258 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.258 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.258 [2024-06-10 12:19:21.845921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.519 [2024-06-10 12:19:21.890397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.519 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.520 [2024-06-10 12:19:21.897869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:16.520 [2024-06-10 12:19:21.941530] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:15:16.520 [2024-06-10 12:19:21.942577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.520 [2024-06-10 12:19:21.986790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.520 [2024-06-10 12:19:21.991912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:15:16.520 [2024-06-10 12:19:22.035900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:15:16.520 Running I/O for 1 seconds... 00:15:16.520 Running I/O for 1 seconds... 00:15:16.779 Running I/O for 1 seconds... 00:15:16.779 Running I/O for 1 seconds... 
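For reference, one of the four bdevperf launches above written out stand-alone. The attach-controller fragment printed in the trace is handed to bdevperf as a bdev-subsystem JSON config over /dev/fd/63; the params block below is verbatim from the log, while the outer "subsystems" wrapper is only a sketch of what gen_nvmf_target_json emits (path shortened to an SPDK checkout):

  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  # Write workload on core mask 0x10: qd 128, 4 KiB I/O, 1 s, 256 MiB of memory.
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /tmp/nvme1.json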
00:15:17.721
00:15:17.721 Latency(us)
00:15:17.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.721 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:17.721 Nvme1n1 : 1.00 14698.36 57.42 0.00 0.00 8684.94 4505.60 18677.76
00:15:17.721 ===================================================================================================================
00:15:17.721 Total : 14698.36 57.42 0.00 0.00 8684.94 4505.60 18677.76
00:15:17.721
00:15:17.721 Latency(us)
00:15:17.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.721 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:17.721 Nvme1n1 : 1.00 188355.16 735.76 0.00 0.00 676.60 273.07 754.35
00:15:17.721 ===================================================================================================================
00:15:17.721 Total : 188355.16 735.76 0.00 0.00 676.60 273.07 754.35
00:15:17.721
00:15:17.721 Latency(us)
00:15:17.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.721 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:17.721 Nvme1n1 : 1.01 11908.29 46.52 0.00 0.00 10713.94 2402.99 15510.19
00:15:17.721 ===================================================================================================================
00:15:17.721 Total : 11908.29 46.52 0.00 0.00 10713.94 2402.99 15510.19
00:15:17.721
00:15:17.721 Latency(us)
00:15:17.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.721 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:17.721 Nvme1n1 : 1.01 11705.32 45.72 0.00 0.00 10895.90 6171.31 22937.60
00:15:17.721 ===================================================================================================================
00:15:17.721 Total : 11705.32 45.72 0.00 0.00 10895.90 6171.31 22937.60
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 598464
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 598466
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 598469
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:17.981 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:17.982 rmmod nvme_tcp
00:15:17.982 rmmod nvme_fabrics
00:15:17.982 rmmod nvme_keyring
00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 598433 ']' 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 598433 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 598433 ']' 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 598433 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 598433 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 598433' 00:15:17.982 killing process with pid 598433 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 598433 00:15:17.982 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 598433 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.243 12:19:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.154 12:19:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.155 00:15:20.155 real 0m12.763s 00:15:20.155 user 0m16.269s 00:15:20.155 sys 0m7.385s 00:15:20.155 12:19:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:20.155 12:19:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.155 ************************************ 00:15:20.155 END TEST nvmf_bdev_io_wait 00:15:20.155 ************************************ 00:15:20.155 12:19:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:20.155 12:19:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:20.155 12:19:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:20.415 12:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.415 ************************************ 00:15:20.415 START TEST nvmf_queue_depth 00:15:20.415 ************************************ 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:20.415 * Looking for test storage... 00:15:20.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.415 12:19:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:28.554 
12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:28.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:28.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:28.554 Found net devices under 0000:31:00.0: cvl_0_0 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:28.554 Found net devices under 0000:31:00.1: cvl_0_1 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.554 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.555 12:19:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.555 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:28.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:28.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.787 ms 00:15:28.817 00:15:28.817 --- 10.0.0.2 ping statistics --- 00:15:28.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.817 rtt min/avg/max/mdev = 0.787/0.787/0.787/0.000 ms 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:15:28.817 00:15:28.817 --- 10.0.0.1 ping statistics --- 00:15:28.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.817 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=603538 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 603538 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 603538 ']' 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:28.817 12:19:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.817 [2024-06-10 12:19:34.263127] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
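nvmfappstart -m 0x2 above boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal stand-in for the suite's waitforlisten helper (paths shortened to an SPDK checkout; polling rpc_get_methods in place of the helper's own probe):

  # Single-core target (mask 0x2) with all tracepoint groups, as traced above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # The UNIX socket lives on the shared filesystem, so it is reachable from
  # the root namespace even though the app runs inside cvl_0_0_ns_spdk.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"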
00:15:28.817 [2024-06-10 12:19:34.263189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.817 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.817 [2024-06-10 12:19:34.360864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.078 [2024-06-10 12:19:34.455192] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.078 [2024-06-10 12:19:34.455259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.078 [2024-06-10 12:19:34.455267] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.078 [2024-06-10 12:19:34.455274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.078 [2024-06-10 12:19:34.455280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.078 [2024-06-10 12:19:34.455308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 [2024-06-10 12:19:35.095383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 Malloc0 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.652 12:19:35 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 [2024-06-10 12:19:35.149115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=603847 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 603847 /var/tmp/bdevperf.sock 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 603847 ']' 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:29.652 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.652 [2024-06-10 12:19:35.203233] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
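The rpc_cmd sequence traced above, issued directly with scripts/rpc.py: create the TCP transport, back it with a 64 MiB malloc bdev, and expose it on 10.0.0.2:4420 so the -z bdevperf instance can attach. All values are verbatim from the trace:

  rpc=./scripts/rpc.py                                   # default socket /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192           # -u 8192: 8 KiB I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420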
00:15:29.652 [2024-06-10 12:19:35.203296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603847 ] 00:15:29.652 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.913 [2024-06-10 12:19:35.274116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.913 [2024-06-10 12:19:35.348618] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.484 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:30.484 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:30.484 12:19:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:30.484 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.484 12:19:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.745 NVMe0n1 00:15:30.745 12:19:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.745 12:19:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.745 Running I/O for 10 seconds... 00:15:40.828 00:15:40.828 Latency(us) 00:15:40.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.828 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:40.828 Verification LBA range: start 0x0 length 0x4000 00:15:40.828 NVMe0n1 : 10.06 11405.92 44.55 0.00 0.00 89475.52 20534.61 74274.13 00:15:40.828 =================================================================================================================== 00:15:40.828 Total : 11405.92 44.55 0.00 0.00 89475.52 20534.61 74274.13 00:15:40.828 0 00:15:40.828 12:19:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 603847 00:15:40.828 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 603847 ']' 00:15:40.828 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 603847 00:15:40.828 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 603847 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 603847' 00:15:41.089 killing process with pid 603847 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 603847 00:15:41.089 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.089 00:15:41.089 Latency(us) 00:15:41.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.089 =================================================================================================================== 00:15:41.089 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 603847 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.089 rmmod nvme_tcp 00:15:41.089 rmmod nvme_fabrics 00:15:41.089 rmmod nvme_keyring 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 603538 ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 603538 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 603538 ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 603538 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:41.089 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 603538 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 603538' 00:15:41.350 killing process with pid 603538 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 603538 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 603538 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.350 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.351 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.351 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.351 12:19:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.351 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.351 12:19:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.894 12:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.894 00:15:43.894 real 0m23.141s 00:15:43.894 user 0m26.062s 00:15:43.894 sys 0m7.303s 
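For reference, the nvmftestfini teardown that just ran reduces to a short shell sequence. A simplified sketch of the flow (not the literal nvmf/common.sh code, which among other things retries the module removal up to 20 times):

    sync                                  # flush dirty pages before unloading drivers
    modprobe -v -r nvme-tcp               # pulls out nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # the killprocess/wait pair seen above for pid 603538
    _remove_spdk_ns                       # tear down the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1              # drop the initiator-side test address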
00:15:43.894 12:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:43.894 12:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.894 ************************************ 00:15:43.894 END TEST nvmf_queue_depth 00:15:43.894 ************************************ 00:15:43.894 12:19:48 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:43.894 12:19:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:43.894 12:19:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:43.894 12:19:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.894 ************************************ 00:15:43.894 START TEST nvmf_target_multipath 00:15:43.894 ************************************ 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:43.894 * Looking for test storage... 00:15:43.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.894 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.895 12:19:49 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
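The common.sh prologue sourced above derives the test's host identity from nvme-cli. A minimal sketch of that pattern; the variable names are the ones in the log, while the ${...##*:} expansion is an assumption about how the UUID is split out:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")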
00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.895 12:19:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:52.037 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:52.037 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:52.037 Found net devices under 0000:31:00.0: cvl_0_0 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:52.037 Found net devices under 0000:31:00.1: cvl_0_1 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.037 12:19:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:15:52.037 00:15:52.037 --- 10.0.0.2 ping statistics --- 00:15:52.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.037 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:15:52.037 00:15:52.037 --- 10.0.0.1 ping statistics --- 00:15:52.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.037 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.037 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:52.038 only one NIC for nvmf test 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.038 rmmod nvme_tcp 00:15:52.038 rmmod nvme_fabrics 00:15:52.038 rmmod nvme_keyring 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.038 12:19:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.950 00:15:53.950 real 0m10.357s 00:15:53.950 user 0m2.296s 00:15:53.950 sys 0m5.959s 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:53.950 12:19:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:53.950 ************************************ 00:15:53.950 END TEST nvmf_target_multipath 00:15:53.950 ************************************ 00:15:53.950 12:19:59 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:53.950 12:19:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:53.950 12:19:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:53.950 12:19:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.950 ************************************ 00:15:53.950 START TEST nvmf_zcopy 00:15:53.950 ************************************ 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:53.950 * Looking for test storage... 
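The multipath run above and the zcopy run starting here both build the same single-host topology out of the two E810 ports: one port is moved into a network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the nvmftestinit commands already visible in the log:

    ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port: target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # second port: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # sanity check before starting the target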
00:15:53.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.950 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.211 12:19:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:02.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.345 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.346 
12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:02.346 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:02.346 Found net devices under 0000:31:00.0: cvl_0_0 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:02.346 Found net devices under 0000:31:00.1: cvl_0_1 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:16:02.346 00:16:02.346 --- 10.0.0.2 ping statistics --- 00:16:02.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.346 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:16:02.346 00:16:02.346 --- 10.0.0.1 ping statistics --- 00:16:02.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.346 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=615528 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 615528 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 615528 ']' 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:02.346 12:20:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.346 [2024-06-10 12:20:07.763868] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:02.346 [2024-06-10 12:20:07.763914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.346 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.346 [2024-06-10 12:20:07.852432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.346 [2024-06-10 12:20:07.926795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.346 [2024-06-10 12:20:07.926845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:02.346 [2024-06-10 12:20:07.926854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.346 [2024-06-10 12:20:07.926860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.346 [2024-06-10 12:20:07.926866] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.346 [2024-06-10 12:20:07.926892] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 [2024-06-10 12:20:08.589753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 [2024-06-10 12:20:08.614002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 malloc0 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 
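Strung together, the target-side rpc_cmd calls above (plus the namespace attach that follows) amount to this RPC sequence; a condensed sketch with the xtrace framing stripped, run via scripts/rpc.py against the target inside cvl_0_0_ns_spdk:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy on; -c 0 disables in-capsule data
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MB malloc bdev with 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1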
12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:03.289 { 00:16:03.289 "params": { 00:16:03.289 "name": "Nvme$subsystem", 00:16:03.289 "trtype": "$TEST_TRANSPORT", 00:16:03.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.289 "adrfam": "ipv4", 00:16:03.289 "trsvcid": "$NVMF_PORT", 00:16:03.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.289 "hdgst": ${hdgst:-false}, 00:16:03.289 "ddgst": ${ddgst:-false} 00:16:03.289 }, 00:16:03.289 "method": "bdev_nvme_attach_controller" 00:16:03.289 } 00:16:03.289 EOF 00:16:03.289 )") 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:03.289 12:20:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:03.289 "params": { 00:16:03.289 "name": "Nvme1", 00:16:03.289 "trtype": "tcp", 00:16:03.289 "traddr": "10.0.0.2", 00:16:03.289 "adrfam": "ipv4", 00:16:03.289 "trsvcid": "4420", 00:16:03.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.289 "hdgst": false, 00:16:03.289 "ddgst": false 00:16:03.289 }, 00:16:03.289 "method": "bdev_nvme_attach_controller" 00:16:03.289 }' 00:16:03.289 [2024-06-10 12:20:08.711756] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:03.289 [2024-06-10 12:20:08.711822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615566 ] 00:16:03.289 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.289 [2024-06-10 12:20:08.783865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.289 [2024-06-10 12:20:08.859648] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.550 Running I/O for 10 seconds... 
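On the initiator side, bdevperf takes no connection flags here; gen_nvmf_target_json pipes it the bdev_nvme_attach_controller document printed above via --json /dev/fd/62, and the remaining flags define the workload. As a gloss (meanings per bdevperf's usage text):

    # bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
    #   --json /dev/fd/62   attach config for Nvme1 from the heredoc-built JSON
    #   -t 10               run for 10 seconds
    #   -q 128              keep up to 128 I/Os outstanding
    #   -w verify           write, then read back and verify
    #   -o 8192             8 KiB per I/O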
00:16:13.555 00:16:13.555 Latency(us) 00:16:13.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.555 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:13.555 Verification LBA range: start 0x0 length 0x1000 00:16:13.555 Nvme1n1 : 10.01 8966.91 70.05 0.00 0.00 14223.28 2280.11 28398.93 00:16:13.555 =================================================================================================================== 00:16:13.555 Total : 8966.91 70.05 0.00 0.00 14223.28 2280.11 28398.93 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=617676 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:13.817 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:13.817 { 00:16:13.817 "params": { 00:16:13.817 "name": "Nvme$subsystem", 00:16:13.817 "trtype": "$TEST_TRANSPORT", 00:16:13.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.817 "adrfam": "ipv4", 00:16:13.817 "trsvcid": "$NVMF_PORT", 00:16:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.817 "hdgst": ${hdgst:-false}, 00:16:13.817 "ddgst": ${ddgst:-false} 00:16:13.817 }, 00:16:13.817 "method": "bdev_nvme_attach_controller" 00:16:13.818 } 00:16:13.818 EOF 00:16:13.818 )") 00:16:13.818 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:13.818 [2024-06-10 12:20:19.260650] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.818 [2024-06-10 12:20:19.260681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.818 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
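A quick cross-check of the verify-run summary above: the MiB/s column is just IOPS times I/O size, so

    8966.91 IOPS x 8192 B = 73,456,927 B/s ~= 70.05 MiB/s

which matches the reported figure (likewise 11405.92 x 4096 B for the earlier queue-depth run gives its 44.55 MiB/s).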
00:16:13.818 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:16:13.818 12:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:13.818 "params": {
00:16:13.818 "name": "Nvme1",
00:16:13.818 "trtype": "tcp",
00:16:13.818 "traddr": "10.0.0.2",
00:16:13.818 "adrfam": "ipv4",
00:16:13.818 "trsvcid": "4420",
00:16:13.818 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:13.818 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:13.818 "hdgst": false,
00:16:13.818 "ddgst": false
00:16:13.818 },
00:16:13.818 "method": "bdev_nvme_attach_controller"
00:16:13.818 }'
00:16:13.818 [2024-06-10 12:20:19.272648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.272656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.284676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.284684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.296706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.296713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.299908] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:16:13.818 [2024-06-10 12:20:19.299954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617676 ]
00:16:13.818 [2024-06-10 12:20:19.308737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.308744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.320767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.320775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 EAL: No free 2048 kB hugepages reported on node 1
00:16:13.818 [2024-06-10 12:20:19.332800] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.332807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.344830] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.344837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.356861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.356868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.364665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.818 [2024-06-10 12:20:19.368891] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.368899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.380923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.380931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
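The error pairs that dominate the rest of this excerpt are produced while the second bdevperf pass (-w randrw -M 50, 5 seconds) starts and runs: nvmf_subsystem_add_ns is repeatedly re-issued for NSID 1, which is still attached from the rpc_cmd nvmf_subsystem_add_ns ... -n 1 call at the top of this excerpt, so each attempt fails in spdk_nvmf_subsystem_add_ns_ext and the RPC layer logs "Unable to add namespace". The steady 12-13 ms cadence (wall clock 12:20:19.27 through 12:20:22.77 here) is consistent with an automated retry loop along these lines, assuming rpc_cmd is the harness wrapper around scripts/rpc.py (an illustration, not the literal zcopy.sh loop body):

# Keep re-adding the same NSID while bdevperf ($perfpid, captured above) runs;
# each attempt fails with "Requested NSID 1 already in use".
while kill -0 "$perfpid" 2> /dev/null; do
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done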
00:16:13.818 [2024-06-10 12:20:19.392954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.392964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.404986] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.404998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.818 [2024-06-10 12:20:19.417019] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.818 [2024-06-10 12:20:19.417027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.428801] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:16:14.079 [2024-06-10 12:20:19.429049] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.429056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.441083] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.441093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.453118] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.453131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.465146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.465154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.477177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.477187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.489213] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.489221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.501256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.501270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.513273] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.513283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.525305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.525314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.537335] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.537344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.549367] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.549377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.561403] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.561417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 Running I/O for 5 seconds...
00:16:14.079 [2024-06-10 12:20:19.573428] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.573435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.588463] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.588479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.601477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.601493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.614689] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.614704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.627723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.627740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.640492] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.640516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.079 [2024-06-10 12:20:19.653291] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.079 [2024-06-10 12:20:19.653306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.080 [2024-06-10 12:20:19.666297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.080 [2024-06-10 12:20:19.666313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.080 [2024-06-10 12:20:19.679259] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.080 [2024-06-10 12:20:19.679273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.692143] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.692158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.704559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.704573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.717601] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.717615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.730769] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.730783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.744216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.744230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.757711] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.757726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.771219] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.771234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.341 [2024-06-10 12:20:19.784137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.341 [2024-06-10 12:20:19.784152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.797555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.797569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.810402] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.810417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.822941] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.822955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.836111] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.836125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.848968] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.848983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.862009] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.862023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.875204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.875219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.888839] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.888858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.901908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.901922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.915449] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.915463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.928282] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.928297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.342 [2024-06-10 12:20:19.941792] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.342 [2024-06-10 12:20:19.941806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:19.955394] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:19.955410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:19.968821] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:19.968836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:19.981399] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:19.981414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:19.994422] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:19.994436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.007507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.007524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.021115] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.021130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.034515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.034532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.047619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.047634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.061245] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.061260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.074751] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.074766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.087750] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.087765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.100850] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.100864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.114238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.114253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.127068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.127083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.140540] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.140558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.153887] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.153901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.167650] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.167665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.180742] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.180757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.193374] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.193389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.603 [2024-06-10 12:20:20.206540] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.603 [2024-06-10 12:20:20.206555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.219565] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.219580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.232552] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.232567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.245319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.245333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.258893] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.258907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.271921] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.271935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.284771] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.284785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.297775] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.297789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.311392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.311407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.325028] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.325043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.337468] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.337482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.349920] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.349934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.362439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.362454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.375055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.375069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.387585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.387599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.400725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.400739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.413899] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.413913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.878 [2024-06-10 12:20:20.426715] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.878 [2024-06-10 12:20:20.426730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.879 [2024-06-10 12:20:20.439480] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.879 [2024-06-10 12:20:20.439494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.879 [2024-06-10 12:20:20.452822] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.879 [2024-06-10 12:20:20.452836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.879 [2024-06-10 12:20:20.466246] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.879 [2024-06-10 12:20:20.466260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.879 [2024-06-10 12:20:20.479478] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.879 [2024-06-10 12:20:20.479493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.492726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.492741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.505846] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.505861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.518358] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.518373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.531268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.531283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.544424] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.544438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.556844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.556858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.569404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.569419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.582390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.582404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.595682] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.595696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.608975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.608989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.621992] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.622006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.635190] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.635209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.648285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.648299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.661669] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.661683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.674966] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.674980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.687679] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.687693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.701244] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.701259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.714621] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.714636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.728119] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.728134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.139 [2024-06-10 12:20:20.741506] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.139 [2024-06-10 12:20:20.741521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.754573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.754588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.767619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.767633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.780635] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.780649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.793676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.793691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.807168] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.807183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.819946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.819960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.833328] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.833343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.846650] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.846666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.859625] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.859640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.872408] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.872422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.885061] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.885075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.897852] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.897868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.911401] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.911416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.924190] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.924209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.936683] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.936698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.949647] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.949662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.962923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.400 [2024-06-10 12:20:20.962937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.400 [2024-06-10 12:20:20.976198] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.401 [2024-06-10 12:20:20.976213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.401 [2024-06-10 12:20:20.989241] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.401 [2024-06-10 12:20:20.989256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.401 [2024-06-10 12:20:21.002456] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.401 [2024-06-10 12:20:21.002470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.015240] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.015255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.028598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.028613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.041780] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.041794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.055322] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.055337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.068623] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.068638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.081650] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.081665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.094021] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.094036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.107094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.107109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.120273] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.120292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.132954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.132969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.145337] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.145352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.158791] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.158805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.172343] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.172358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.185741] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.185755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.198128] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.198143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.210686] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.210700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.223162] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.223177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.236403] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.236418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.249757] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.249772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.661 [2024-06-10 12:20:21.262561] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.661 [2024-06-10 12:20:21.262576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.275313] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.275328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.287977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.287991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.300618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.300633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.313578] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.313593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.326790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.326804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.340047] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.340061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.353487] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.353501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.366699] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.366718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.380029] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.380044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.393030] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.393045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.406559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.406574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.419564] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.419578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.432880] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.432895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.446242] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.446256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.458765] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.458780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.471651] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.471665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.484569] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.484583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.497820] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.497834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.510710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.510725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.922 [2024-06-10 12:20:21.523689] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.922 [2024-06-10 12:20:21.523704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.536600] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.536615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.549097] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.549111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.561538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.561552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.574266] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.574281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.587336] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.587350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.600621] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.600636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.613767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.613785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.625982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.625996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.638007] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.638021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.651469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.651483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.664969] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.664983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.678491] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.678506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.691684] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.691699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.704881] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.182 [2024-06-10 12:20:21.704895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.182 [2024-06-10 12:20:21.718124] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.718138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.183 [2024-06-10 12:20:21.730962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.730977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.183 [2024-06-10 12:20:21.744277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.744291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.183 [2024-06-10 12:20:21.757645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.757659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.183 [2024-06-10 12:20:21.770904] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.770919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.183 [2024-06-10 12:20:21.784243] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.183 [2024-06-10 12:20:21.784258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.797226] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.797240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.809972] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.809987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.823245] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.823259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.836512] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.836526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.849446] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.849460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.862936] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.862954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.875682] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.875697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.888954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.888968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.902409] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.902423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.915134] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.915148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.928339] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.928353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.941502] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.941517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.954456] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.954470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.967536] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.967551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.981183] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.981201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:21.993868] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:21.993882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:22.007374] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:22.007389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:22.020873] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:22.020888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:22.033883] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:22.033897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.444 [2024-06-10 12:20:22.046910] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.444 [2024-06-10 12:20:22.046924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.060158] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.060172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.073424] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.073438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.086595] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.086609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.099761] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.099776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.112579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.112598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.126004] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.126019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.139225] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.139240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.152621] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.152635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.165260] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.165274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.177977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.177992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.190615] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.190629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.203372] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.203386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.216757] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.216772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.229556] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.229571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.242152] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.242166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.255212] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.255227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.268271] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.268285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.280910] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.280924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.294127] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.294141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.710 [2024-06-10 12:20:22.306656] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.710 [2024-06-10 12:20:22.306671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.320137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.320152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.333690] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.333704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.346833] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.346848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.359948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.359962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.373217] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.373232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.386410] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.386424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.970 [2024-06-10 12:20:22.399153] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.970 [2024-06-10 12:20:22.399168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.412466] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.412480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.426010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.426025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.438864] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.438879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.451719] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.451734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.464642] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.464657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.477943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.477959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.490524] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.490538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.503857] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.503871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.517225] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.517240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.530298] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.530313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.543652] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.543667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.556460] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.556475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.971 [2024-06-10 12:20:22.569685] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.971 [2024-06-10 12:20:22.569700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.582905] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.582920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.596120] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.596135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.608809] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.608824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.621780] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.621794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.635340] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.635355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.647724] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.647738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.660790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.660805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.673937] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.673951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.686732] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.686747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.698983] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.698998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.711997] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.712012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.725113] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.725128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.737734] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.737748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.750891] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.750906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.764045] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.764059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.231 [2024-06-10 12:20:22.777162] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.231 [2024-06-10 12:20:22.777176]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.231 [2024-06-10 12:20:22.790560] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.231 [2024-06-10 12:20:22.790575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.231 [2024-06-10 12:20:22.803859] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.231 [2024-06-10 12:20:22.803874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.231 [2024-06-10 12:20:22.816599] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.231 [2024-06-10 12:20:22.816615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.231 [2024-06-10 12:20:22.829135] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.231 [2024-06-10 12:20:22.829150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.842226] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.842241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.855470] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.855485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.868666] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.868680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.881238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.881253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.894946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.894961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.907901] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.907916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.920995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.921010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.933605] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.933619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.946227] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.946242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.959431] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.959446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.972469] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.972485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.985453] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.985468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:22.998426] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:22.998441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.011718] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.011733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.025036] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.025051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.037462] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.037476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.050684] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.050699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.063026] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.063041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.075392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.075407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.493 [2024-06-10 12:20:23.087784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.493 [2024-06-10 12:20:23.087802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.100802] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.754 [2024-06-10 12:20:23.100817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.113965] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.754 [2024-06-10 12:20:23.113979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.126085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.754 [2024-06-10 12:20:23.126101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.139446] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.754 [2024-06-10 12:20:23.139461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.152085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.754 [2024-06-10 12:20:23.152099] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.754 [2024-06-10 12:20:23.164781] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.164794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.178145] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.178160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.191252] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.191267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.204513] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.204527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.217720] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.217734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.231066] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.231080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.243808] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.243822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.256895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.256909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.269415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.269429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.282036] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.282051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.294966] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.294980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.307928] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.307942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.320960] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.320974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.333526] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.333547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.346471] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.346485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.755 [2024-06-10 12:20:23.358999] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.755 [2024-06-10 12:20:23.359013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.372017] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.372032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.385228] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.385244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.398312] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.398326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.411507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.411521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.424107] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.424121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.016 [2024-06-10 12:20:23.437516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.016 [2024-06-10 12:20:23.437530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.450890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.450904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.463556] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.463569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.476747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.476762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.490307] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.490322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.503127] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.503141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.516275] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.516289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.529599] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.529613] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.542204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.542218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.555684] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.555699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.568942] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.568957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.582147] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.582166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.594843] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.594858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.608083] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.608097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.017 [2024-06-10 12:20:23.621126] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.017 [2024-06-10 12:20:23.621140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.634200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.634216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.647305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.647319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.660877] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.660891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.673271] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.673285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.685828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.685841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.699406] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.699420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.712880] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.712894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.726126] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.726141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.739913] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.739927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.752828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.752842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.765950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.765964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.778779] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.778793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.791358] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.791373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.804789] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.804803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.817099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.817114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.829619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.829637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.842839] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.842854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.855369] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.855383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.868439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.868453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.278 [2024-06-10 12:20:23.881849] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.278 [2024-06-10 12:20:23.881863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.894768] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.894782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.908026] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.908040] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.921054] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.921068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.933827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.933841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.947082] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.947096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.960553] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.960567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.972731] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.972745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.985899] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.985913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:23.999096] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:23.999110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.011848] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.011862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.024911] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.024925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.038095] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.038109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.051344] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.051358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.064962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.064976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.077926] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.077943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.091151] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.091166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.104255] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.104269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.117551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.117565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.130597] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.130611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.539 [2024-06-10 12:20:24.143766] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.539 [2024-06-10 12:20:24.143781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.156954] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.156969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.169467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.169482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.182595] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.182609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.195538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.195553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.208569] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.208584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.221402] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.221417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.234386] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.234401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.247286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.247301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.260200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.260215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.273216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.273230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.286361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.286376] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.299636] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.299651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.312990] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.313004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.325736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.325750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.338379] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.338394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.350752] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.350766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.364083] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.364098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.377123] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.377138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.390390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.390405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.801 [2024-06-10 12:20:24.403617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.801 [2024-06-10 12:20:24.403631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.416533] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.416547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.428841] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.428856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.441532] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.441547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.454439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.454454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.466907] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.466922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.480186] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.480207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.493366] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.493381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.506703] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.506719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.519680] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.519695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.532847] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.532861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.062 [2024-06-10 12:20:24.545525] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.062 [2024-06-10 12:20:24.545540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.558634] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.558649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.572271] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.572286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.585168] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.585183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 00:16:19.063 Latency(us) 00:16:19.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.063 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:19.063 Nvme1n1 : 5.01 19543.28 152.68 0.00 0.00 6542.40 2717.01 16384.00 00:16:19.063 =================================================================================================================== 00:16:19.063 Total : 19543.28 152.68 0.00 0.00 6542.40 2717.01 16384.00 00:16:19.063 [2024-06-10 12:20:24.594165] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.594179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.606184] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.606201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.618220] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.618233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.063 [2024-06-10 12:20:24.630251] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.063 [2024-06-10 12:20:24.630262] 
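A quick cross-check of the table above: the MiB/s column is just the IOPS column times the 8192-byte IO size, MiB/s = IOPS x 8192 / 2^20. A one-liner to verify it (a sketch; awk is assumed available on the host, it is not part of the captured run):

    awk 'BEGIN { print 19543.28 * 8192 / (1024 * 1024) }'   # prints ~152.68, matching the reported MiB/s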
00:16:19.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (617676) - No such process
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 617676
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.324 delay0
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:19.324 12:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:16:19.324 EAL: No free 2048 kB hugepages reported on node 1
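The three rpc_cmd calls above swap NSID 1 over to a delay bdev before launching the abort example; rpc_cmd in this harness is a wrapper around the repo's scripts/rpc.py. A minimal standalone sketch of the same swap, assuming a running SPDK target and the stock rpc.py (the delay latencies are in microseconds):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read, avg/p99 write latency
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The one-second artificial latency keeps commands in flight long enough for the abort run below to have something to cancel.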
00:16:19.324 [2024-06-10 12:20:24.892377] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:25.907 Initializing NVMe Controllers
00:16:25.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:25.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:25.907 Initialization complete. Launching workers.
00:16:25.907 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2242
00:16:25.907 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2528, failed to submit 34
00:16:25.907 success 2356, unsuccess 172, failed 0
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 615528 ']'
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 615528
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 615528 ']'
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 615528
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 615528
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 615528'
killing process with pid 615528
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 615528
00:16:25.907 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 615528
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:26.168 12:20:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:28.077 12:20:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:28.077
00:16:28.077 real 0m34.216s
00:16:28.077 user 0m45.458s
00:16:28.077 sys 0m10.704s
00:16:28.077 12:20:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable
00:16:28.077 12:20:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:28.077 ************************************
00:16:28.077 END TEST nvmf_zcopy
00:16:28.077 ************************************
00:16:28.339 12:20:33 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:28.339 12:20:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:16:28.339 12:20:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:16:28.339 12:20:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:28.339 ************************************
00:16:28.339 START TEST nvmf_nmic
00:16:28.339 ************************************
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:28.339 * Looking for test storage...
00:16:28.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
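The NVME_HOSTNQN / NVME_HOSTID pair captured above can be reproduced on any box with nvme-cli installed (an assumption for this sketch); the host ID is simply the UUID tail of the generated NQN, and the pair is presumably what NVME_HOST threads into the later nvme connect calls:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # yields nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip through the last ':' to keep the bare UUID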
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same /opt/golangci, /opt/protoc and /opt/go prefixes repeated by earlier sourcing, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated prefixes elided ...]:/var/lib/snapd/snap/bin
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated prefixes elided ...]:/var/lib/snapd/snap/bin
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated prefixes elided ...]:/var/lib/snapd/snap/bin
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:16:28.339 12:20:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.481
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:36.482 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:36.482 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:36.482 Found net devices under 0000:31:00.0: cvl_0_0 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:36.482 Found net devices under 0000:31:00.1: cvl_0_1 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:36.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:36.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:16:36.482 00:16:36.482 --- 10.0.0.2 ping statistics --- 00:16:36.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.482 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:16:36.482 00:16:36.482 --- 10.0.0.1 ping statistics --- 00:16:36.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.482 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=624588 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 624588 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 624588 ']' 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:36.482 12:20:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.482 [2024-06-10 12:20:41.615261] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:16:36.483 [2024-06-10 12:20:41.615307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.483 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.483 [2024-06-10 12:20:41.691927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.483 [2024-06-10 12:20:41.759894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.483 [2024-06-10 12:20:41.759931] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.483 [2024-06-10 12:20:41.759939] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.483 [2024-06-10 12:20:41.759945] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.483 [2024-06-10 12:20:41.759950] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.483 [2024-06-10 12:20:41.762211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.483 [2024-06-10 12:20:41.762282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.483 [2024-06-10 12:20:41.762539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.483 [2024-06-10 12:20:41.762541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.065 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:37.065 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:37.065 12:20:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.065 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 [2024-06-10 12:20:42.432756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 Malloc0 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 [2024-06-10 12:20:42.491963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:37.066 test case1: single bdev can't be used in multiple subsystems 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 [2024-06-10 12:20:42.527907] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:37.066 [2024-06-10 12:20:42.527925] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:37.066 [2024-06-10 12:20:42.527933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.066 request: 00:16:37.066 { 00:16:37.066 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:37.066 "namespace": { 00:16:37.066 "bdev_name": "Malloc0", 00:16:37.066 "no_auto_visible": false 00:16:37.066 }, 00:16:37.066 "method": "nvmf_subsystem_add_ns", 00:16:37.066 "req_id": 1 00:16:37.066 } 00:16:37.066 Got JSON-RPC error response 00:16:37.066 response: 00:16:37.066 { 00:16:37.066 "code": -32602, 00:16:37.066 "message": "Invalid parameters" 00:16:37.066 } 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:37.066 Adding namespace failed - expected result. 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:37.066 test case2: host connect to nvmf target in multiple paths 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:37.066 [2024-06-10 12:20:42.540011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.066 12:20:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.982 12:20:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:40.367 12:20:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.367 12:20:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:40.367 12:20:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.367 12:20:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:40.367 12:20:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:42.299 12:20:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:42.299 [global] 00:16:42.299 thread=1 00:16:42.299 invalidate=1 00:16:42.299 rw=write 00:16:42.299 time_based=1 00:16:42.299 runtime=1 00:16:42.299 ioengine=libaio 00:16:42.299 direct=1 00:16:42.299 bs=4096 00:16:42.299 iodepth=1 00:16:42.299 norandommap=0 00:16:42.299 numjobs=1 00:16:42.299 00:16:42.299 verify_dump=1 00:16:42.299 verify_backlog=512 00:16:42.299 verify_state_save=0 00:16:42.299 do_verify=1 00:16:42.299 verify=crc32c-intel 00:16:42.299 [job0] 00:16:42.299 filename=/dev/nvme0n1 00:16:42.299 Could not set queue depth (nvme0n1) 00:16:42.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.608 fio-3.35 00:16:42.608 Starting 1 thread 00:16:43.678 00:16:43.678 job0: (groupid=0, jobs=1): err= 0: pid=626126: Mon Jun 10 12:20:49 2024 00:16:43.678 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:16:43.678 slat (nsec): min=6475, max=61075, avg=26677.59, stdev=2759.06 00:16:43.678 clat (usec): min=271, max=1215, avg=912.98, stdev=159.89 00:16:43.678 lat (usec): min=298, max=1242, avg=939.66, stdev=159.97 00:16:43.678 clat percentiles (usec): 00:16:43.678 | 1.00th=[ 420], 5.00th=[ 586], 10.00th=[ 701], 20.00th=[ 766], 00:16:43.678 | 30.00th=[ 889], 40.00th=[ 947], 50.00th=[ 988], 60.00th=[ 996], 00:16:43.678 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1045], 95.00th=[ 1074], 00:16:43.678 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:16:43.678 | 99.99th=[ 1221] 00:16:43.678 write: IOPS=799, BW=3197KiB/s (3274kB/s)(3200KiB/1001msec); 0 zone resets 00:16:43.678 slat (usec): min=8, max=28354, avg=66.60, stdev=1001.43 00:16:43.678 clat (usec): min=164, max=1260, avg=569.33, stdev=178.52 00:16:43.678 lat (usec): min=189, max=28749, avg=635.93, stdev=1011.71 00:16:43.678 clat percentiles (usec): 00:16:43.678 | 1.00th=[ 235], 5.00th=[ 269], 10.00th=[ 314], 20.00th=[ 416], 00:16:43.679 | 30.00th=[ 453], 40.00th=[ 523], 50.00th=[ 570], 60.00th=[ 627], 00:16:43.679 | 70.00th=[ 693], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 832], 00:16:43.679 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 1254], 99.95th=[ 1254], 00:16:43.679 | 99.99th=[ 1254] 00:16:43.679 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.679 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.679 lat (usec) : 250=0.99%, 500=22.64%, 750=32.47%, 1000=28.96% 00:16:43.679 lat (msec) : 2=14.94% 00:16:43.679 cpu : usr=2.60%, sys=5.30%, ctx=1315, majf=0, minf=1 00:16:43.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.679 issued rwts: total=512,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.679 00:16:43.679 Run status group 0 (all jobs): 00:16:43.679 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:43.679 WRITE: bw=3197KiB/s (3274kB/s), 3197KiB/s-3197KiB/s (3274kB/s-3274kB/s), io=3200KiB (3277kB), run=1001-1001msec 00:16:43.679 00:16:43.679 Disk stats (read/write): 00:16:43.679 nvme0n1: ios=537/631, merge=0/0, ticks=1416/297, in_queue=1713, util=99.00% 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:43.679 12:20:49 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.679 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.679 rmmod nvme_tcp 00:16:43.679 rmmod nvme_fabrics 00:16:43.679 rmmod nvme_keyring 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 624588 ']' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 624588 ']' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 624588' 00:16:43.940 killing process with pid 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 624588 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.940 12:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.486 12:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.486 00:16:46.486 real 0m17.842s 00:16:46.486 user 0m45.164s 00:16:46.486 sys 0m6.516s 00:16:46.486 12:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:46.486 12:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.486 ************************************ 00:16:46.486 END TEST nvmf_nmic 00:16:46.486 ************************************ 00:16:46.486 12:20:51 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
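[editor's note — not part of the trace] The nmic.sh pass that just completed reduces to a short RPC sequence. Below is a minimal shell sketch of it, built only from the rpc.py calls and arguments visible in the trace above; the rpc.py path, NQNs, serials, and the 10.0.0.2 listener address are the values this particular run used, not fixed defaults.

    # Sketch of target/nmic.sh test case1, assembled from the rpc_cmd calls
    # traced above. Assumes an nvmf_tgt is already listening on the default
    # RPC socket (/var/tmp/spdk.sock).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # A second subsystem can be created, but adding the same bdev to it must
    # fail: Malloc0 is already claimed exclusive_write by cnode1, so the RPC
    # returns -32602 "Invalid parameters", exactly as logged above.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'Adding namespace failed - expected result.'
    fi

Test case2 then adds a second listener on port 4421 to cnode1 and connects the host over both 4420 and 4421, which is why the later disconnect reports two controllers for the one NQN.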
00:16:46.486 12:20:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:46.486 12:20:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:46.486 12:20:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.486 ************************************ 00:16:46.486 START TEST nvmf_fio_target 00:16:46.486 ************************************ 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:46.486 * Looking for test storage... 00:16:46.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.486 12:20:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.632 12:20:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:54.632 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.632 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:54.633 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.633 12:20:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:54.633 Found net devices under 0000:31:00.0: cvl_0_0 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:54.633 Found net devices under 0000:31:00.1: cvl_0_1 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:16:54.633 00:16:54.633 --- 10.0.0.2 ping statistics --- 00:16:54.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.633 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:16:54.633 00:16:54.633 --- 10.0.0.1 ping statistics --- 00:16:54.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.633 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.633 12:20:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=631143 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 631143 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 631143 ']' 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
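[editor's note — not part of the trace] The nvmf_tcp_init plumbing traced above condenses to the shell below; a sketch assuming the same cvl_0_0/cvl_0_1 interface names this rig happened to enumerate for the two E810 ports. The first port is moved into a private network namespace and becomes the target side; the second stays in the root namespace as the initiator, so one host can exercise NVMe/TCP over real NICs against itself.

    # Condensed sketch of nvmf_tcp_init as executed in the log above.
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port

    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP (root ns)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (inside ns)

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the discovery/IO port on the initiator side.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity pings in both directions, as in the log.
    ping -c 1 10.0.0.2                        # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> initiator

Because the target lives in the namespace, the trace then prefixes every nvmf_tgt invocation with "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD wrapper seen in nvmfappstart below).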
00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:54.633 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 [2024-06-10 12:21:00.068293] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:54.633 [2024-06-10 12:21:00.068364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.633 [2024-06-10 12:21:00.147899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.633 [2024-06-10 12:21:00.224658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.633 [2024-06-10 12:21:00.224701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.634 [2024-06-10 12:21:00.224708] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.634 [2024-06-10 12:21:00.224715] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.634 [2024-06-10 12:21:00.224720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.634 [2024-06-10 12:21:00.224887] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.634 [2024-06-10 12:21:00.225002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.634 [2024-06-10 12:21:00.225160] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.634 [2024-06-10 12:21:00.225161] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.578 12:21:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:55.578 [2024-06-10 12:21:01.034160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.578 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.840 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:55.840 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.840 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:55.840 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:56.101 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:56.101 12:21:01 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:56.363 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:56.363 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:56.363 12:21:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:56.624 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:56.624 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:56.885 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:56.885 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:56.885 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:56.886 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:57.147 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:57.407 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:57.407 12:21:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.668 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:57.668 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.668 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.930 [2024-06-10 12:21:03.331673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.930 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:57.930 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:58.191 12:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:16:59.578 12:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:17:02.125 12:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:02.125 [global] 00:17:02.125 thread=1 00:17:02.125 invalidate=1 00:17:02.125 rw=write 00:17:02.125 time_based=1 00:17:02.125 runtime=1 00:17:02.125 ioengine=libaio 00:17:02.125 direct=1 00:17:02.125 bs=4096 00:17:02.125 iodepth=1 00:17:02.125 norandommap=0 00:17:02.125 numjobs=1 00:17:02.125 00:17:02.125 verify_dump=1 00:17:02.125 verify_backlog=512 00:17:02.125 verify_state_save=0 00:17:02.125 do_verify=1 00:17:02.125 verify=crc32c-intel 00:17:02.125 [job0] 00:17:02.125 filename=/dev/nvme0n1 00:17:02.125 [job1] 00:17:02.125 filename=/dev/nvme0n2 00:17:02.125 [job2] 00:17:02.125 filename=/dev/nvme0n3 00:17:02.125 [job3] 00:17:02.125 filename=/dev/nvme0n4 00:17:02.125 Could not set queue depth (nvme0n1) 00:17:02.125 Could not set queue depth (nvme0n2) 00:17:02.125 Could not set queue depth (nvme0n3) 00:17:02.125 Could not set queue depth (nvme0n4) 00:17:02.125 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.125 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.125 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.125 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.125 fio-3.35 00:17:02.125 Starting 4 threads 00:17:03.512 00:17:03.512 job0: (groupid=0, jobs=1): err= 0: pid=632859: Mon Jun 10 12:21:08 2024 00:17:03.512 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:03.512 slat (nsec): min=6555, max=59317, avg=26379.81, stdev=3573.00 00:17:03.512 clat (usec): min=756, max=1537, avg=1169.07, stdev=88.92 00:17:03.512 lat (usec): min=783, max=1563, avg=1195.45, stdev=89.36 00:17:03.512 clat percentiles (usec): 00:17:03.512 | 1.00th=[ 906], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:03.512 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1205], 00:17:03.512 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:17:03.512 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1532], 99.95th=[ 1532], 00:17:03.512 | 99.99th=[ 1532] 00:17:03.512 write: IOPS=557, BW=2230KiB/s (2283kB/s)(2232KiB/1001msec); 0 zone resets 00:17:03.512 slat (nsec): min=8876, max=67458, avg=29406.35, stdev=9592.77 00:17:03.512 clat (usec): min=165, max=907, 
avg=650.32, stdev=114.52 00:17:03.512 lat (usec): min=175, max=939, avg=679.73, stdev=119.40 00:17:03.512 clat percentiles (usec): 00:17:03.512 | 1.00th=[ 371], 5.00th=[ 457], 10.00th=[ 494], 20.00th=[ 545], 00:17:03.512 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 701], 00:17:03.512 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 807], 00:17:03.512 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:17:03.512 | 99.99th=[ 906] 00:17:03.512 bw ( KiB/s): min= 4096, max= 4096, per=45.99%, avg=4096.00, stdev= 0.00, samples=1 00:17:03.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:03.512 lat (usec) : 250=0.19%, 500=5.89%, 750=35.70%, 1000=12.62% 00:17:03.512 lat (msec) : 2=45.61% 00:17:03.512 cpu : usr=2.10%, sys=4.20%, ctx=1071, majf=0, minf=1 00:17:03.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.512 issued rwts: total=512,558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.512 job1: (groupid=0, jobs=1): err= 0: pid=632860: Mon Jun 10 12:21:08 2024 00:17:03.512 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:17:03.512 slat (nsec): min=25055, max=26039, avg=25353.59, stdev=239.90 00:17:03.512 clat (usec): min=1128, max=42080, avg=39505.23, stdev=9892.06 00:17:03.512 lat (usec): min=1153, max=42105, avg=39530.58, stdev=9892.09 00:17:03.512 clat percentiles (usec): 00:17:03.512 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:17:03.512 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:03.512 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:03.512 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:03.512 | 99.99th=[42206] 00:17:03.512 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:17:03.512 slat (nsec): min=8629, max=61294, avg=28188.72, stdev=9598.65 00:17:03.512 clat (usec): min=311, max=942, avg=650.53, stdev=115.70 00:17:03.512 lat (usec): min=322, max=973, avg=678.72, stdev=120.92 00:17:03.512 clat percentiles (usec): 00:17:03.512 | 1.00th=[ 363], 5.00th=[ 429], 10.00th=[ 494], 20.00th=[ 553], 00:17:03.512 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:17:03.513 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 807], 00:17:03.513 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:17:03.513 | 99.99th=[ 947] 00:17:03.513 bw ( KiB/s): min= 4096, max= 4096, per=45.99%, avg=4096.00, stdev= 0.00, samples=1 00:17:03.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:03.513 lat (usec) : 500=10.59%, 750=65.60%, 1000=20.60% 00:17:03.513 lat (msec) : 2=0.19%, 50=3.02% 00:17:03.513 cpu : usr=1.37%, sys=1.47%, ctx=529, majf=0, minf=1 00:17:03.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.513 job2: (groupid=0, jobs=1): err= 0: pid=632861: Mon Jun 10 12:21:08 2024 00:17:03.513 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:03.513 slat (nsec): min=7528, max=63883, avg=27381.51, stdev=3168.10 00:17:03.513 clat (usec): min=557, max=1294, avg=1038.40, stdev=104.46 00:17:03.513 lat (usec): min=584, max=1321, avg=1065.78, stdev=104.69 00:17:03.513 clat percentiles (usec): 00:17:03.513 | 1.00th=[ 701], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 963], 00:17:03.513 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:17:03.513 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:17:03.513 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:17:03.513 | 99.99th=[ 1303] 00:17:03.513 write: IOPS=695, BW=2781KiB/s (2848kB/s)(2784KiB/1001msec); 0 zone resets 00:17:03.513 slat (nsec): min=8976, max=68678, avg=31630.92, stdev=10174.58 00:17:03.513 clat (usec): min=197, max=985, avg=607.28, stdev=136.40 00:17:03.513 lat (usec): min=207, max=1020, avg=638.91, stdev=140.72 00:17:03.513 clat percentiles (usec): 00:17:03.513 | 1.00th=[ 281], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 486], 00:17:03.513 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 652], 00:17:03.513 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:17:03.513 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 988], 00:17:03.513 | 99.99th=[ 988] 00:17:03.513 bw ( KiB/s): min= 4096, max= 4096, per=45.99%, avg=4096.00, stdev= 0.00, samples=1 00:17:03.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:03.513 lat (usec) : 250=0.17%, 500=12.75%, 750=37.83%, 1000=19.95% 00:17:03.513 lat (msec) : 2=29.30% 00:17:03.513 cpu : usr=2.10%, sys=5.20%, ctx=1209, majf=0, minf=1 00:17:03.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 issued rwts: total=512,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.513 job3: (groupid=0, jobs=1): err= 0: pid=632862: Mon Jun 10 12:21:08 2024 00:17:03.513 read: IOPS=429, BW=1718KiB/s (1760kB/s)(1720KiB/1001msec) 00:17:03.513 slat (nsec): min=6974, max=44153, avg=24200.41, stdev=6352.03 00:17:03.513 clat (usec): min=572, max=41319, avg=1692.35, stdev=5755.36 00:17:03.513 lat (usec): min=579, max=41345, avg=1716.55, stdev=5755.60 00:17:03.513 clat percentiles (usec): 00:17:03.513 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 807], 00:17:03.513 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 873], 00:17:03.513 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 930], 95.00th=[ 971], 00:17:03.513 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:03.513 | 99.99th=[41157] 00:17:03.513 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:03.513 slat (nsec): min=9843, max=51255, avg=29114.93, stdev=9697.67 00:17:03.513 clat (usec): min=258, max=692, avg=468.75, stdev=75.17 00:17:03.513 lat (usec): min=277, max=726, avg=497.86, stdev=78.59 00:17:03.513 clat percentiles (usec): 00:17:03.513 | 1.00th=[ 289], 5.00th=[ 334], 10.00th=[ 367], 20.00th=[ 396], 00:17:03.513 | 30.00th=[ 429], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 494], 00:17:03.513 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 586], 00:17:03.513 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 693], 00:17:03.513 | 99.99th=[ 693] 00:17:03.513 bw ( KiB/s): 
min= 4096, max= 4096, per=45.99%, avg=4096.00, stdev= 0.00, samples=1 00:17:03.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:03.513 lat (usec) : 500=33.86%, 750=23.78%, 1000=40.87% 00:17:03.513 lat (msec) : 2=0.53%, 50=0.96% 00:17:03.513 cpu : usr=1.20%, sys=2.80%, ctx=943, majf=0, minf=1 00:17:03.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.513 issued rwts: total=430,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.513 00:17:03.513 Run status group 0 (all jobs): 00:17:03.513 READ: bw=5752KiB/s (5890kB/s), 66.5KiB/s-2046KiB/s (68.1kB/s-2095kB/s), io=5884KiB (6025kB), run=1001-1023msec 00:17:03.513 WRITE: bw=8907KiB/s (9121kB/s), 2002KiB/s-2781KiB/s (2050kB/s-2848kB/s), io=9112KiB (9331kB), run=1001-1023msec 00:17:03.513 00:17:03.513 Disk stats (read/write): 00:17:03.513 nvme0n1: ios=452/512, merge=0/0, ticks=811/273, in_queue=1084, util=91.18% 00:17:03.513 nvme0n2: ios=50/512, merge=0/0, ticks=524/271, in_queue=795, util=88.05% 00:17:03.513 nvme0n3: ios=520/512, merge=0/0, ticks=1386/243, in_queue=1629, util=97.15% 00:17:03.513 nvme0n4: ios=301/512, merge=0/0, ticks=1508/228, in_queue=1736, util=97.22% 00:17:03.513 12:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:03.513 [global] 00:17:03.513 thread=1 00:17:03.513 invalidate=1 00:17:03.513 rw=randwrite 00:17:03.513 time_based=1 00:17:03.513 runtime=1 00:17:03.513 ioengine=libaio 00:17:03.513 direct=1 00:17:03.513 bs=4096 00:17:03.513 iodepth=1 00:17:03.513 norandommap=0 00:17:03.513 numjobs=1 00:17:03.513 00:17:03.513 verify_dump=1 00:17:03.513 verify_backlog=512 00:17:03.513 verify_state_save=0 00:17:03.513 do_verify=1 00:17:03.513 verify=crc32c-intel 00:17:03.513 [job0] 00:17:03.513 filename=/dev/nvme0n1 00:17:03.513 [job1] 00:17:03.513 filename=/dev/nvme0n2 00:17:03.513 [job2] 00:17:03.513 filename=/dev/nvme0n3 00:17:03.513 [job3] 00:17:03.513 filename=/dev/nvme0n4 00:17:03.513 Could not set queue depth (nvme0n1) 00:17:03.513 Could not set queue depth (nvme0n2) 00:17:03.513 Could not set queue depth (nvme0n3) 00:17:03.513 Could not set queue depth (nvme0n4) 00:17:03.775 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.775 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.775 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.775 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.775 fio-3.35 00:17:03.775 Starting 4 threads 00:17:05.160 00:17:05.160 job0: (groupid=0, jobs=1): err= 0: pid=633553: Mon Jun 10 12:21:10 2024 00:17:05.160 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:05.160 slat (nsec): min=6908, max=47707, avg=23496.75, stdev=6959.54 00:17:05.160 clat (usec): min=450, max=1379, avg=843.78, stdev=137.22 00:17:05.160 lat (usec): min=477, max=1386, avg=867.28, stdev=139.16 00:17:05.160 clat percentiles (usec): 00:17:05.160 | 1.00th=[ 510], 5.00th=[ 603], 10.00th=[ 693], 20.00th=[ 742], 00:17:05.160 | 30.00th=[ 783], 
40.00th=[ 807], 50.00th=[ 832], 60.00th=[ 857], 00:17:05.160 | 70.00th=[ 889], 80.00th=[ 963], 90.00th=[ 1037], 95.00th=[ 1074], 00:17:05.160 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1385], 99.95th=[ 1385], 00:17:05.160 | 99.99th=[ 1385] 00:17:05.160 write: IOPS=962, BW=3848KiB/s (3941kB/s)(3852KiB/1001msec); 0 zone resets 00:17:05.160 slat (nsec): min=3057, max=51564, avg=28652.22, stdev=8935.34 00:17:05.160 clat (usec): min=239, max=928, avg=536.94, stdev=126.18 00:17:05.160 lat (usec): min=249, max=933, avg=565.59, stdev=128.77 00:17:05.160 clat percentiles (usec): 00:17:05.160 | 1.00th=[ 277], 5.00th=[ 343], 10.00th=[ 379], 20.00th=[ 416], 00:17:05.160 | 30.00th=[ 465], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 570], 00:17:05.160 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 742], 00:17:05.160 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 930], 00:17:05.160 | 99.99th=[ 930] 00:17:05.160 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.160 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.160 lat (usec) : 250=0.14%, 500=25.90%, 750=43.86%, 1000=24.07% 00:17:05.160 lat (msec) : 2=6.03% 00:17:05.160 cpu : usr=2.00%, sys=4.30%, ctx=1479, majf=0, minf=1 00:17:05.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.160 issued rwts: total=512,963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.160 job1: (groupid=0, jobs=1): err= 0: pid=633567: Mon Jun 10 12:21:10 2024 00:17:05.160 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:17:05.160 slat (nsec): min=26216, max=32012, avg=27088.47, stdev=1238.53 00:17:05.160 clat (usec): min=40802, max=41798, avg=41071.21, stdev=260.88 00:17:05.160 lat (usec): min=40829, max=41825, avg=41098.30, stdev=260.95 00:17:05.160 clat percentiles (usec): 00:17:05.160 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:17:05.160 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:05.160 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:17:05.161 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:05.161 | 99.99th=[41681] 00:17:05.161 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:05.161 slat (nsec): min=3569, max=62445, avg=27819.55, stdev=11255.01 00:17:05.161 clat (usec): min=133, max=740, avg=463.19, stdev=123.95 00:17:05.161 lat (usec): min=142, max=774, avg=491.01, stdev=128.49 00:17:05.161 clat percentiles (usec): 00:17:05.161 | 1.00th=[ 167], 5.00th=[ 265], 10.00th=[ 302], 20.00th=[ 355], 00:17:05.161 | 30.00th=[ 383], 40.00th=[ 433], 50.00th=[ 469], 60.00th=[ 502], 00:17:05.161 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 635], 95.00th=[ 668], 00:17:05.161 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 742], 00:17:05.161 | 99.99th=[ 742] 00:17:05.161 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.161 lat (usec) : 250=3.95%, 500=53.48%, 750=38.98% 00:17:05.161 lat (msec) : 50=3.58% 00:17:05.161 cpu : usr=0.87%, sys=1.93%, ctx=533, majf=0, minf=1 00:17:05.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:17:05.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.161 job2: (groupid=0, jobs=1): err= 0: pid=633572: Mon Jun 10 12:21:10 2024 00:17:05.161 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:05.161 slat (nsec): min=4588, max=62070, avg=24008.62, stdev=6579.08 00:17:05.161 clat (usec): min=411, max=1889, avg=1001.34, stdev=159.24 00:17:05.161 lat (usec): min=437, max=1914, avg=1025.34, stdev=162.61 00:17:05.161 clat percentiles (usec): 00:17:05.161 | 1.00th=[ 603], 5.00th=[ 693], 10.00th=[ 750], 20.00th=[ 865], 00:17:05.161 | 30.00th=[ 963], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:17:05.161 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:17:05.161 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1893], 99.95th=[ 1893], 00:17:05.161 | 99.99th=[ 1893] 00:17:05.161 write: IOPS=839, BW=3357KiB/s (3437kB/s)(3360KiB/1001msec); 0 zone resets 00:17:05.161 slat (nsec): min=9439, max=64971, avg=29896.71, stdev=8283.01 00:17:05.161 clat (usec): min=170, max=857, avg=523.14, stdev=131.72 00:17:05.161 lat (usec): min=202, max=889, avg=553.04, stdev=134.44 00:17:05.161 clat percentiles (usec): 00:17:05.161 | 1.00th=[ 255], 5.00th=[ 306], 10.00th=[ 351], 20.00th=[ 388], 00:17:05.161 | 30.00th=[ 445], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 570], 00:17:05.161 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 725], 00:17:05.161 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 857], 99.95th=[ 857], 00:17:05.161 | 99.99th=[ 857] 00:17:05.161 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.161 lat (usec) : 250=0.59%, 500=26.85%, 750=37.13%, 1000=10.80% 00:17:05.161 lat (msec) : 2=24.63% 00:17:05.161 cpu : usr=1.40%, sys=4.50%, ctx=1353, majf=0, minf=1 00:17:05.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 issued rwts: total=512,840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.161 job3: (groupid=0, jobs=1): err= 0: pid=633573: Mon Jun 10 12:21:10 2024 00:17:05.161 read: IOPS=25, BW=103KiB/s (105kB/s)(104KiB/1010msec) 00:17:05.161 slat (nsec): min=6926, max=31233, avg=24601.54, stdev=7327.01 00:17:05.161 clat (usec): min=612, max=42206, avg=26070.00, stdev=20329.36 00:17:05.161 lat (usec): min=642, max=42235, avg=26094.60, stdev=20332.16 00:17:05.161 clat percentiles (usec): 00:17:05.161 | 1.00th=[ 611], 5.00th=[ 660], 10.00th=[ 791], 20.00th=[ 922], 00:17:05.161 | 30.00th=[ 947], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:05.161 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:05.161 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:05.161 | 99.99th=[42206] 00:17:05.161 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:17:05.161 slat (nsec): min=3228, max=52879, avg=28162.19, stdev=10051.43 00:17:05.161 clat (usec): min=194, max=965, avg=609.93, stdev=131.88 00:17:05.161 lat (usec): 
min=198, max=997, avg=638.09, stdev=136.51 00:17:05.161 clat percentiles (usec): 00:17:05.161 | 1.00th=[ 310], 5.00th=[ 392], 10.00th=[ 420], 20.00th=[ 494], 00:17:05.161 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:17:05.161 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 816], 00:17:05.161 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 963], 00:17:05.161 | 99.99th=[ 963] 00:17:05.161 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.161 lat (usec) : 250=0.37%, 500=19.14%, 750=62.27%, 1000=15.24% 00:17:05.161 lat (msec) : 50=2.97% 00:17:05.161 cpu : usr=1.19%, sys=1.29%, ctx=539, majf=0, minf=1 00:17:05.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.161 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.161 00:17:05.161 Run status group 0 (all jobs): 00:17:05.161 READ: bw=4123KiB/s (4222kB/s), 73.3KiB/s-2046KiB/s (75.0kB/s-2095kB/s), io=4276KiB (4379kB), run=1001-1037msec 00:17:05.161 WRITE: bw=10.6MiB/s (11.2MB/s), 1975KiB/s-3848KiB/s (2022kB/s-3941kB/s), io=11.0MiB (11.6MB), run=1001-1037msec 00:17:05.161 00:17:05.161 Disk stats (read/write): 00:17:05.161 nvme0n1: ios=564/714, merge=0/0, ticks=726/360, in_queue=1086, util=84.57% 00:17:05.161 nvme0n2: ios=60/512, merge=0/0, ticks=668/174, in_queue=842, util=91.23% 00:17:05.161 nvme0n3: ios=566/589, merge=0/0, ticks=609/277, in_queue=886, util=95.36% 00:17:05.161 nvme0n4: ios=78/512, merge=0/0, ticks=959/281, in_queue=1240, util=94.24% 00:17:05.161 12:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:05.161 [global] 00:17:05.161 thread=1 00:17:05.161 invalidate=1 00:17:05.161 rw=write 00:17:05.161 time_based=1 00:17:05.161 runtime=1 00:17:05.161 ioengine=libaio 00:17:05.161 direct=1 00:17:05.161 bs=4096 00:17:05.161 iodepth=128 00:17:05.161 norandommap=0 00:17:05.161 numjobs=1 00:17:05.161 00:17:05.161 verify_dump=1 00:17:05.161 verify_backlog=512 00:17:05.161 verify_state_save=0 00:17:05.161 do_verify=1 00:17:05.161 verify=crc32c-intel 00:17:05.161 [job0] 00:17:05.161 filename=/dev/nvme0n1 00:17:05.161 [job1] 00:17:05.161 filename=/dev/nvme0n2 00:17:05.161 [job2] 00:17:05.161 filename=/dev/nvme0n3 00:17:05.161 [job3] 00:17:05.161 filename=/dev/nvme0n4 00:17:05.161 Could not set queue depth (nvme0n1) 00:17:05.161 Could not set queue depth (nvme0n2) 00:17:05.161 Could not set queue depth (nvme0n3) 00:17:05.161 Could not set queue depth (nvme0n4) 00:17:05.423 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.423 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.423 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.423 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.423 fio-3.35 00:17:05.423 Starting 4 threads 00:17:06.810 00:17:06.810 job0: (groupid=0, jobs=1): err= 0: pid=634325: Mon Jun 10 12:21:12 
2024 00:17:06.810 read: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1007msec) 00:17:06.810 slat (nsec): min=930, max=13235k, avg=116065.67, stdev=796600.93 00:17:06.810 clat (usec): min=1490, max=52817, avg=14715.10, stdev=8810.49 00:17:06.810 lat (usec): min=2298, max=52818, avg=14831.16, stdev=8887.25 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 3884], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 7177], 00:17:06.810 | 30.00th=[ 8160], 40.00th=[10028], 50.00th=[11207], 60.00th=[13566], 00:17:06.810 | 70.00th=[19792], 80.00th=[22676], 90.00th=[28443], 95.00th=[31065], 00:17:06.810 | 99.00th=[40633], 99.50th=[44303], 99.90th=[52691], 99.95th=[52691], 00:17:06.810 | 99.99th=[52691] 00:17:06.810 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:17:06.810 slat (nsec): min=1702, max=30956k, avg=126327.38, stdev=1160470.12 00:17:06.810 clat (usec): min=654, max=81211, avg=15836.46, stdev=11940.17 00:17:06.810 lat (usec): min=688, max=81216, avg=15962.79, stdev=12059.66 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 1663], 5.00th=[ 3654], 10.00th=[ 4883], 20.00th=[ 7046], 00:17:06.810 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[13698], 60.00th=[17957], 00:17:06.810 | 70.00th=[20317], 80.00th=[22152], 90.00th=[25560], 95.00th=[38536], 00:17:06.810 | 99.00th=[67634], 99.50th=[74974], 99.90th=[81265], 99.95th=[81265], 00:17:06.810 | 99.99th=[81265] 00:17:06.810 bw ( KiB/s): min=14992, max=17776, per=20.14%, avg=16384.00, stdev=1968.59, samples=2 00:17:06.810 iops : min= 3748, max= 4444, avg=4096.00, stdev=492.15, samples=2 00:17:06.810 lat (usec) : 750=0.09%, 1000=0.27% 00:17:06.810 lat (msec) : 2=0.32%, 4=3.25%, 10=39.02%, 20=27.17%, 50=28.72% 00:17:06.810 lat (msec) : 100=1.16% 00:17:06.810 cpu : usr=3.18%, sys=4.67%, ctx=243, majf=0, minf=1 00:17:06.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.810 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.810 job1: (groupid=0, jobs=1): err= 0: pid=634328: Mon Jun 10 12:21:12 2024 00:17:06.810 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1007msec) 00:17:06.810 slat (nsec): min=875, max=19914k, avg=83384.45, stdev=699536.80 00:17:06.810 clat (usec): min=1707, max=53236, avg=11946.63, stdev=7439.78 00:17:06.810 lat (usec): min=1712, max=53247, avg=12030.02, stdev=7506.47 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 6521], 00:17:06.810 | 30.00th=[ 7177], 40.00th=[ 8160], 50.00th=[ 9503], 60.00th=[11469], 00:17:06.810 | 70.00th=[13829], 80.00th=[16712], 90.00th=[21365], 95.00th=[23987], 00:17:06.810 | 99.00th=[42730], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070], 00:17:06.810 | 99.99th=[53216] 00:17:06.810 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:17:06.810 slat (nsec): min=1479, max=25040k, avg=79579.15, stdev=629571.42 00:17:06.810 clat (usec): min=509, max=57647, avg=10966.48, stdev=8909.13 00:17:06.810 lat (usec): min=516, max=57649, avg=11046.05, stdev=8954.36 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 2114], 5.00th=[ 3818], 10.00th=[ 4228], 20.00th=[ 5145], 00:17:06.810 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 7111], 60.00th=[ 8291], 00:17:06.810 | 70.00th=[ 
9896], 80.00th=[19006], 90.00th=[22938], 95.00th=[30278], 00:17:06.810 | 99.00th=[42730], 99.50th=[47973], 99.90th=[55837], 99.95th=[55837], 00:17:06.810 | 99.99th=[57410] 00:17:06.810 bw ( KiB/s): min=22288, max=22722, per=27.66%, avg=22505.00, stdev=306.88, samples=2 00:17:06.810 iops : min= 5572, max= 5680, avg=5626.00, stdev=76.37, samples=2 00:17:06.810 lat (usec) : 750=0.03% 00:17:06.810 lat (msec) : 2=0.53%, 4=3.92%, 10=57.23%, 20=22.76%, 50=15.28% 00:17:06.810 lat (msec) : 100=0.26% 00:17:06.810 cpu : usr=3.58%, sys=5.77%, ctx=441, majf=0, minf=1 00:17:06.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.810 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.810 job2: (groupid=0, jobs=1): err= 0: pid=634334: Mon Jun 10 12:21:12 2024 00:17:06.810 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:17:06.810 slat (nsec): min=969, max=16596k, avg=95299.38, stdev=781871.74 00:17:06.810 clat (usec): min=3018, max=49053, avg=11989.74, stdev=5297.61 00:17:06.810 lat (usec): min=4117, max=49069, avg=12085.04, stdev=5373.70 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 8455], 00:17:06.810 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11600], 00:17:06.810 | 70.00th=[12518], 80.00th=[15008], 90.00th=[19006], 95.00th=[21627], 00:17:06.810 | 99.00th=[34341], 99.50th=[41157], 99.90th=[49021], 99.95th=[49021], 00:17:06.810 | 99.99th=[49021] 00:17:06.810 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:06.810 slat (nsec): min=1679, max=12468k, avg=102088.03, stdev=666022.38 00:17:06.810 clat (usec): min=2593, max=76262, avg=13540.87, stdev=13947.27 00:17:06.810 lat (usec): min=2600, max=76270, avg=13642.96, stdev=14048.88 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 3752], 5.00th=[ 5276], 10.00th=[ 5538], 20.00th=[ 7308], 00:17:06.810 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10552], 00:17:06.810 | 70.00th=[11600], 80.00th=[11994], 90.00th=[17695], 95.00th=[53740], 00:17:06.810 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:17:06.810 | 99.99th=[76022] 00:17:06.810 bw ( KiB/s): min=12288, max=28672, per=25.18%, avg=20480.00, stdev=11585.24, samples=2 00:17:06.810 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:17:06.810 lat (msec) : 4=0.75%, 10=54.29%, 20=35.94%, 50=5.71%, 100=3.31% 00:17:06.810 cpu : usr=4.28%, sys=4.98%, ctx=387, majf=0, minf=1 00:17:06.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.810 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.810 job3: (groupid=0, jobs=1): err= 0: pid=634338: Mon Jun 10 12:21:12 2024 00:17:06.810 read: IOPS=5449, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1007msec) 00:17:06.810 slat (nsec): min=1007, max=12064k, avg=84553.80, stdev=583724.80 00:17:06.810 clat (usec): min=1177, max=71315, avg=10226.22, stdev=7447.27 00:17:06.810 lat (usec): min=1205, max=71324, 
avg=10310.77, stdev=7523.67 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 1975], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6849], 00:17:06.810 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8455], 00:17:06.810 | 70.00th=[ 9634], 80.00th=[11076], 90.00th=[16581], 95.00th=[24511], 00:17:06.810 | 99.00th=[46400], 99.50th=[60031], 99.90th=[66323], 99.95th=[71828], 00:17:06.810 | 99.99th=[71828] 00:17:06.810 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:17:06.810 slat (nsec): min=1650, max=7659.3k, avg=84930.94, stdev=498431.84 00:17:06.810 clat (usec): min=576, max=71292, avg=12677.69, stdev=14987.74 00:17:06.810 lat (usec): min=586, max=71302, avg=12762.62, stdev=15091.68 00:17:06.810 clat percentiles (usec): 00:17:06.810 | 1.00th=[ 1647], 5.00th=[ 3687], 10.00th=[ 4359], 20.00th=[ 5342], 00:17:06.810 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7570], 00:17:06.810 | 70.00th=[ 8356], 80.00th=[12518], 90.00th=[27919], 95.00th=[60556], 00:17:06.810 | 99.00th=[66323], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:17:06.810 | 99.99th=[70779] 00:17:06.810 bw ( KiB/s): min=12000, max=33056, per=27.69%, avg=22528.00, stdev=14888.84, samples=2 00:17:06.810 iops : min= 3000, max= 8264, avg=5632.00, stdev=3722.21, samples=2 00:17:06.810 lat (usec) : 750=0.04%, 1000=0.04% 00:17:06.810 lat (msec) : 2=1.05%, 4=4.05%, 10=69.83%, 20=12.99%, 50=8.19% 00:17:06.810 lat (msec) : 100=3.80% 00:17:06.810 cpu : usr=3.98%, sys=6.36%, ctx=412, majf=0, minf=1 00:17:06.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.810 issued rwts: total=5488,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.810 00:17:06.810 Run status group 0 (all jobs): 00:17:06.810 READ: bw=76.3MiB/s (80.0MB/s), 15.3MiB/s-21.3MiB/s (16.1MB/s-22.4MB/s), io=76.9MiB (80.6MB), run=1005-1007msec 00:17:06.810 WRITE: bw=79.4MiB/s (83.3MB/s), 15.9MiB/s-21.8MiB/s (16.7MB/s-22.9MB/s), io=80.0MiB (83.9MB), run=1005-1007msec 00:17:06.810 00:17:06.810 Disk stats (read/write): 00:17:06.810 nvme0n1: ios=3116/3249, merge=0/0, ticks=25585/29927, in_queue=55512, util=90.38% 00:17:06.811 nvme0n2: ios=4880/5120, merge=0/0, ticks=40363/35990, in_queue=76353, util=91.32% 00:17:06.811 nvme0n3: ios=3632/3910, merge=0/0, ticks=44073/57258, in_queue=101331, util=95.88% 00:17:06.811 nvme0n4: ios=4912/5120, merge=0/0, ticks=31144/47686, in_queue=78830, util=99.36% 00:17:06.811 12:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:06.811 [global] 00:17:06.811 thread=1 00:17:06.811 invalidate=1 00:17:06.811 rw=randwrite 00:17:06.811 time_based=1 00:17:06.811 runtime=1 00:17:06.811 ioengine=libaio 00:17:06.811 direct=1 00:17:06.811 bs=4096 00:17:06.811 iodepth=128 00:17:06.811 norandommap=0 00:17:06.811 numjobs=1 00:17:06.811 00:17:06.811 verify_dump=1 00:17:06.811 verify_backlog=512 00:17:06.811 verify_state_save=0 00:17:06.811 do_verify=1 00:17:06.811 verify=crc32c-intel 00:17:06.811 [job0] 00:17:06.811 filename=/dev/nvme0n1 00:17:06.811 [job1] 00:17:06.811 filename=/dev/nvme0n2 00:17:06.811 [job2] 00:17:06.811 filename=/dev/nvme0n3 00:17:06.811 [job3] 00:17:06.811 filename=/dev/nvme0n4 
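The job file echoed above is what fio-wrapper feeds to fio for this pass: four jobs, one per namespace of nqn.2016-06.io.spdk:cnode1, issuing 4 KiB random writes at iodepth=128 through libaio with direct I/O and crc32c-intel verification. As a rough standalone equivalent — a sketch only, assuming fio is installed, the namespaces are still attached as /dev/nvme0n1 through /dev/nvme0n4, and the wrapper's own state handling is skipped (the file path here is arbitrary) — the same job file can be regenerated and run directly:

#!/usr/bin/env bash
# Sketch: recreate the job file printed in the log and run it with plain fio.
# Assumes /dev/nvme0n1..4 are the four namespaces of cnode1 (hypothetical path /tmp/nvmf-randwrite.fio).
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
sudo fio /tmp/nvmf-randwrite.fio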
00:17:06.811 Could not set queue depth (nvme0n1) 00:17:06.811 Could not set queue depth (nvme0n2) 00:17:06.811 Could not set queue depth (nvme0n3) 00:17:06.811 Could not set queue depth (nvme0n4) 00:17:07.071 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.071 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.071 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.071 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.071 fio-3.35 00:17:07.071 Starting 4 threads 00:17:08.490 00:17:08.490 job0: (groupid=0, jobs=1): err= 0: pid=634877: Mon Jun 10 12:21:13 2024 00:17:08.490 read: IOPS=5612, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:17:08.490 slat (nsec): min=849, max=8511.4k, avg=87549.91, stdev=517828.88 00:17:08.490 clat (usec): min=1269, max=23267, avg=10975.82, stdev=2148.47 00:17:08.490 lat (usec): min=3718, max=23294, avg=11063.37, stdev=2185.47 00:17:08.490 clat percentiles (usec): 00:17:08.490 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9372], 00:17:08.490 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:17:08.490 | 70.00th=[11338], 80.00th=[12518], 90.00th=[14091], 95.00th=[15270], 00:17:08.490 | 99.00th=[16909], 99.50th=[18482], 99.90th=[21365], 99.95th=[21365], 00:17:08.490 | 99.99th=[23200] 00:17:08.490 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:17:08.490 slat (nsec): min=1450, max=6848.6k, avg=79499.18, stdev=399627.92 00:17:08.490 clat (usec): min=3723, max=21383, avg=10635.38, stdev=2876.86 00:17:08.490 lat (usec): min=3725, max=21391, avg=10714.87, stdev=2895.86 00:17:08.490 clat percentiles (usec): 00:17:08.490 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 7308], 20.00th=[ 8291], 00:17:08.490 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11076], 00:17:08.490 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14746], 95.00th=[15664], 00:17:08.490 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20055], 99.95th=[20317], 00:17:08.490 | 99.99th=[21365] 00:17:08.490 bw ( KiB/s): min=22840, max=25312, per=25.32%, avg=24076.00, stdev=1747.97, samples=2 00:17:08.490 iops : min= 5710, max= 6328, avg=6019.00, stdev=436.99, samples=2 00:17:08.490 lat (msec) : 2=0.01%, 4=0.27%, 10=40.04%, 20=59.58%, 50=0.10% 00:17:08.490 cpu : usr=2.89%, sys=3.89%, ctx=659, majf=0, minf=1 00:17:08.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:08.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.490 issued rwts: total=5635,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.490 job1: (groupid=0, jobs=1): err= 0: pid=634878: Mon Jun 10 12:21:13 2024 00:17:08.490 read: IOPS=7943, BW=31.0MiB/s (32.5MB/s)(31.2MiB/1004msec) 00:17:08.490 slat (nsec): min=840, max=8163.7k, avg=60747.26, stdev=481919.16 00:17:08.490 clat (usec): min=759, max=18538, avg=8759.73, stdev=2231.99 00:17:08.490 lat (usec): min=784, max=18542, avg=8820.48, stdev=2260.97 00:17:08.490 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 3359], 5.00th=[ 4490], 10.00th=[ 6390], 20.00th=[ 7242], 00:17:08.491 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 
8979], 00:17:08.491 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11469], 95.00th=[12387], 00:17:08.491 | 99.00th=[15008], 99.50th=[15926], 99.90th=[18220], 99.95th=[18220], 00:17:08.491 | 99.99th=[18482] 00:17:08.491 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:17:08.491 slat (nsec): min=1465, max=7250.2k, avg=46335.85, stdev=347724.20 00:17:08.491 clat (usec): min=738, max=22736, avg=7033.93, stdev=2470.86 00:17:08.491 lat (usec): min=955, max=22738, avg=7080.26, stdev=2495.97 00:17:08.491 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 1975], 5.00th=[ 3490], 10.00th=[ 4015], 20.00th=[ 4883], 00:17:08.491 | 30.00th=[ 5473], 40.00th=[ 5997], 50.00th=[ 6783], 60.00th=[ 7898], 00:17:08.491 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10421], 00:17:08.491 | 99.00th=[14746], 99.50th=[16319], 99.90th=[19530], 99.95th=[19530], 00:17:08.491 | 99.99th=[22676] 00:17:08.491 bw ( KiB/s): min=32256, max=33280, per=34.45%, avg=32768.00, stdev=724.08, samples=2 00:17:08.491 iops : min= 8064, max= 8320, avg=8192.00, stdev=181.02, samples=2 00:17:08.491 lat (usec) : 750=0.01%, 1000=0.06% 00:17:08.491 lat (msec) : 2=0.70%, 4=5.52%, 10=78.17%, 20=15.54%, 50=0.01% 00:17:08.491 cpu : usr=5.88%, sys=8.87%, ctx=558, majf=0, minf=1 00:17:08.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:08.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.491 issued rwts: total=7975,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.491 job2: (groupid=0, jobs=1): err= 0: pid=634879: Mon Jun 10 12:21:13 2024 00:17:08.491 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:17:08.491 slat (nsec): min=866, max=19224k, avg=110556.69, stdev=765079.15 00:17:08.491 clat (usec): min=6032, max=50990, avg=14099.31, stdev=8767.01 00:17:08.491 lat (usec): min=6045, max=50997, avg=14209.86, stdev=8842.51 00:17:08.491 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8717], 00:17:08.491 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10290], 00:17:08.491 | 70.00th=[15008], 80.00th=[19268], 90.00th=[28705], 95.00th=[35914], 00:17:08.491 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[50070], 00:17:08.491 | 99.99th=[51119] 00:17:08.491 write: IOPS=5441, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1004msec); 0 zone resets 00:17:08.491 slat (nsec): min=1471, max=9944.5k, avg=75250.81, stdev=446769.65 00:17:08.491 clat (usec): min=3230, max=37487, avg=10013.08, stdev=3672.07 00:17:08.491 lat (usec): min=3238, max=37493, avg=10088.33, stdev=3702.79 00:17:08.491 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 5407], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8160], 00:17:08.491 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:17:08.491 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[12780], 95.00th=[17957], 00:17:08.491 | 99.00th=[27919], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:17:08.491 | 99.99th=[37487] 00:17:08.491 bw ( KiB/s): min=19360, max=23328, per=22.44%, avg=21344.00, stdev=2805.80, samples=2 00:17:08.491 iops : min= 4840, max= 5832, avg=5336.00, stdev=701.45, samples=2 00:17:08.491 lat (msec) : 4=0.11%, 10=66.09%, 20=22.78%, 50=11.00%, 100=0.02% 00:17:08.491 cpu : usr=4.39%, sys=3.59%, ctx=490, majf=0, minf=1 00:17:08.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:08.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.491 issued rwts: total=5120,5463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.491 job3: (groupid=0, jobs=1): err= 0: pid=634881: Mon Jun 10 12:21:13 2024 00:17:08.491 read: IOPS=3662, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:17:08.491 slat (nsec): min=901, max=45975k, avg=146431.98, stdev=1075318.31 00:17:08.491 clat (usec): min=1129, max=77494, avg=18597.94, stdev=11410.14 00:17:08.491 lat (usec): min=4482, max=77499, avg=18744.37, stdev=11440.28 00:17:08.491 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[12387], 00:17:08.491 | 30.00th=[14615], 40.00th=[15664], 50.00th=[16712], 60.00th=[17957], 00:17:08.491 | 70.00th=[18744], 80.00th=[20841], 90.00th=[25560], 95.00th=[30278], 00:17:08.491 | 99.00th=[76022], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:17:08.491 | 99.99th=[77071] 00:17:08.491 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:17:08.491 slat (nsec): min=1483, max=7998.7k, avg=107948.04, stdev=587244.11 00:17:08.491 clat (usec): min=6982, max=22920, avg=14261.25, stdev=3486.03 00:17:08.491 lat (usec): min=6985, max=22927, avg=14369.20, stdev=3462.75 00:17:08.491 clat percentiles (usec): 00:17:08.491 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[11731], 00:17:08.491 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13566], 60.00th=[14746], 00:17:08.491 | 70.00th=[15664], 80.00th=[17433], 90.00th=[19268], 95.00th=[21103], 00:17:08.491 | 99.00th=[22414], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:17:08.491 | 99.99th=[22938] 00:17:08.491 bw ( KiB/s): min=16136, max=16384, per=17.10%, avg=16260.00, stdev=175.36, samples=2 00:17:08.491 iops : min= 4034, max= 4096, avg=4065.00, stdev=43.84, samples=2 00:17:08.491 lat (msec) : 2=0.01%, 10=9.01%, 20=74.60%, 50=14.74%, 100=1.63% 00:17:08.491 cpu : usr=2.79%, sys=4.28%, ctx=306, majf=0, minf=1 00:17:08.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:08.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.491 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.491 00:17:08.491 Run status group 0 (all jobs): 00:17:08.491 READ: bw=87.1MiB/s (91.3MB/s), 14.3MiB/s-31.0MiB/s (15.0MB/s-32.5MB/s), io=87.5MiB (91.8MB), run=1004-1005msec 00:17:08.491 WRITE: bw=92.9MiB/s (97.4MB/s), 15.9MiB/s-31.9MiB/s (16.7MB/s-33.4MB/s), io=93.3MiB (97.9MB), run=1004-1005msec 00:17:08.491 00:17:08.491 Disk stats (read/write): 00:17:08.491 nvme0n1: ios=4648/4988, merge=0/0, ticks=24594/24520, in_queue=49114, util=92.69% 00:17:08.491 nvme0n2: ios=6680/7168, merge=0/0, ticks=52641/45556, in_queue=98197, util=85.52% 00:17:08.491 nvme0n3: ios=4096/4563, merge=0/0, ticks=19429/13968, in_queue=33397, util=88.40% 00:17:08.491 nvme0n4: ios=3047/3072, merge=0/0, ticks=15394/10185, in_queue=25579, util=91.04% 00:17:08.491 12:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:08.491 12:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=635214 00:17:08.491 12:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 
-- # sleep 3 00:17:08.491 12:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:08.491 [global] 00:17:08.491 thread=1 00:17:08.491 invalidate=1 00:17:08.491 rw=read 00:17:08.491 time_based=1 00:17:08.491 runtime=10 00:17:08.491 ioengine=libaio 00:17:08.491 direct=1 00:17:08.491 bs=4096 00:17:08.491 iodepth=1 00:17:08.491 norandommap=1 00:17:08.491 numjobs=1 00:17:08.491 00:17:08.491 [job0] 00:17:08.491 filename=/dev/nvme0n1 00:17:08.491 [job1] 00:17:08.491 filename=/dev/nvme0n2 00:17:08.491 [job2] 00:17:08.491 filename=/dev/nvme0n3 00:17:08.491 [job3] 00:17:08.491 filename=/dev/nvme0n4 00:17:08.491 Could not set queue depth (nvme0n1) 00:17:08.491 Could not set queue depth (nvme0n2) 00:17:08.491 Could not set queue depth (nvme0n3) 00:17:08.491 Could not set queue depth (nvme0n4) 00:17:08.763 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.763 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.763 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.763 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.763 fio-3.35 00:17:08.763 Starting 4 threads 00:17:11.310 12:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:11.571 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=880640, buflen=4096 00:17:11.571 fio: pid=635406, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:11.571 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:11.831 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3117056, buflen=4096 00:17:11.831 fio: pid=635405, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:11.831 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:11.831 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:11.831 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=294912, buflen=4096 00:17:11.831 fio: pid=635402, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:11.831 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:11.831 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:12.093 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:12.093 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:12.093 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=311296, buflen=4096 00:17:12.093 fio: pid=635403, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:12.093 00:17:12.093 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=635402: Mon Jun 10 12:21:17 2024 00:17:12.093 read: IOPS=25, BW=98.8KiB/s (101kB/s)(288KiB/2914msec) 00:17:12.093 slat (usec): min=25, max=249, avg=28.77, stdev=26.18 00:17:12.093 clat (usec): min=683, max=41722, avg=40426.48, stdev=4751.11 00:17:12.093 lat (usec): min=721, max=41971, avg=40455.28, stdev=4750.63 00:17:12.093 clat percentiles (usec): 00:17:12.093 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:12.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:12.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:12.093 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:12.093 | 99.99th=[41681] 00:17:12.093 bw ( KiB/s): min= 96, max= 104, per=6.82%, avg=99.20, stdev= 4.38, samples=5 00:17:12.093 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:12.093 lat (usec) : 750=1.37% 00:17:12.093 lat (msec) : 50=97.26% 00:17:12.093 cpu : usr=0.00%, sys=0.10%, ctx=74, majf=0, minf=1 00:17:12.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.093 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=635403: Mon Jun 10 12:21:17 2024 00:17:12.093 read: IOPS=24, BW=98.2KiB/s (101kB/s)(304KiB/3096msec) 00:17:12.093 slat (usec): min=9, max=244, avg=30.08, stdev=32.83 00:17:12.093 clat (usec): min=765, max=42109, avg=40681.47, stdev=6592.20 00:17:12.093 lat (usec): min=800, max=42133, avg=40709.14, stdev=6591.91 00:17:12.093 clat percentiles (usec): 00:17:12.093 | 1.00th=[ 766], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:12.093 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:12.093 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:12.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.093 | 99.99th=[42206] 00:17:12.093 bw ( KiB/s): min= 96, max= 102, per=6.68%, avg=97.00, stdev= 2.45, samples=6 00:17:12.093 iops : min= 24, max= 25, avg=24.17, stdev= 0.41, samples=6 00:17:12.093 lat (usec) : 1000=1.30% 00:17:12.093 lat (msec) : 2=1.30%, 50=96.10% 00:17:12.093 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:17:12.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.093 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=635405: Mon Jun 10 12:21:17 2024 00:17:12.093 read: IOPS=279, BW=1118KiB/s (1145kB/s)(3044KiB/2723msec) 00:17:12.093 slat (usec): min=6, max=15115, avg=63.57, stdev=770.95 00:17:12.093 clat (usec): min=426, max=42060, avg=3506.78, stdev=10083.02 00:17:12.093 lat (usec): min=451, max=42084, avg=3570.40, stdev=10102.34 00:17:12.093 clat percentiles (usec): 00:17:12.093 | 1.00th=[ 515], 5.00th=[ 553], 10.00th=[ 644], 20.00th=[ 734], 00:17:12.093 | 30.00th=[ 
816], 40.00th=[ 881], 50.00th=[ 914], 60.00th=[ 947], 00:17:12.093 | 70.00th=[ 971], 80.00th=[ 1012], 90.00th=[ 1074], 95.00th=[41681], 00:17:12.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.093 | 99.99th=[42206] 00:17:12.093 bw ( KiB/s): min= 96, max= 3240, per=60.12%, avg=873.60, stdev=1361.52, samples=5 00:17:12.093 iops : min= 24, max= 810, avg=218.40, stdev=340.38, samples=5 00:17:12.093 lat (usec) : 500=0.66%, 750=21.26%, 1000=55.12% 00:17:12.093 lat (msec) : 2=16.40%, 50=6.43% 00:17:12.093 cpu : usr=0.33%, sys=0.73%, ctx=764, majf=0, minf=1 00:17:12.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 issued rwts: total=762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.093 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=635406: Mon Jun 10 12:21:17 2024 00:17:12.093 read: IOPS=84, BW=337KiB/s (345kB/s)(860KiB/2549msec) 00:17:12.093 slat (nsec): min=6661, max=60626, avg=24570.19, stdev=5377.49 00:17:12.093 clat (usec): min=337, max=42250, avg=11821.89, stdev=18083.51 00:17:12.093 lat (usec): min=362, max=42275, avg=11846.54, stdev=18083.80 00:17:12.093 clat percentiles (usec): 00:17:12.093 | 1.00th=[ 375], 5.00th=[ 494], 10.00th=[ 578], 20.00th=[ 619], 00:17:12.093 | 30.00th=[ 660], 40.00th=[ 758], 50.00th=[ 832], 60.00th=[ 873], 00:17:12.093 | 70.00th=[ 947], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:12.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.093 | 99.99th=[42206] 00:17:12.093 bw ( KiB/s): min= 96, max= 832, per=16.80%, avg=244.80, stdev=328.27, samples=5 00:17:12.093 iops : min= 24, max= 208, avg=61.20, stdev=82.07, samples=5 00:17:12.093 lat (usec) : 500=5.56%, 750=32.41%, 1000=32.87% 00:17:12.093 lat (msec) : 2=1.39%, 50=27.31% 00:17:12.093 cpu : usr=0.04%, sys=0.31%, ctx=216, majf=0, minf=2 00:17:12.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.093 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.093 00:17:12.093 Run status group 0 (all jobs): 00:17:12.093 READ: bw=1452KiB/s (1487kB/s), 98.2KiB/s-1118KiB/s (101kB/s-1145kB/s), io=4496KiB (4604kB), run=2549-3096msec 00:17:12.093 00:17:12.093 Disk stats (read/write): 00:17:12.093 nvme0n1: ios=70/0, merge=0/0, ticks=2831/0, in_queue=2831, util=94.72% 00:17:12.093 nvme0n2: ios=75/0, merge=0/0, ticks=3051/0, in_queue=3051, util=95.60% 00:17:12.093 nvme0n3: ios=639/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.03% 00:17:12.093 nvme0n4: ios=67/0, merge=0/0, ticks=2314/0, in_queue=2314, util=96.06% 00:17:12.355 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:12.355 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:12.355 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:17:12.355 12:21:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:12.618 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:12.618 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 635214 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:12.878 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:13.137 nvmf hotplug test: fio failed as expected 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:13.137 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.138 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.138 rmmod nvme_tcp 00:17:13.138 
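(modprobe -v prints each rmmod it performs, so the unload of nvme_tcp above is followed by its now-unused dependencies nvme_fabrics and nvme_keyring below.)

Before this teardown, everything from fio.sh@58 to the "fio failed as expected" echo above was the hotplug negative test: fio was started in the background against all four namespaces (-t read -r 10), the script slept 3 seconds, then deleted the concat0/raid0 arrays and every Malloc bdev out from under the live connection, so the in-flight reads completed with err=121 (Remote I/O error) and the nonzero fio exit status became the pass condition. Condensed to a skeleton — a sketch under the assumption of the same tree layout and a single Malloc0-backed namespace, not the script verbatim:

# Hotplug check, reduced to its essentials (run from the spdk tree).
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s of background reads
fio_pid=$!
sleep 3
scripts/rpc.py bdev_malloc_delete Malloc0                  # hot-remove the namespace's backing bdev
if ! wait "$fio_pid"; then
    echo 'nvmf hotplug test: fio failed as expected'       # reads ended with err=121, Remote I/O error
fi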
rmmod nvme_fabrics 00:17:13.138 rmmod nvme_keyring 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 631143 ']' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 631143 ']' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 631143' 00:17:13.398 killing process with pid 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 631143 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.398 12:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.945 12:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.945 00:17:15.945 real 0m29.361s 00:17:15.945 user 2m36.846s 00:17:15.945 sys 0m9.516s 00:17:15.945 12:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:15.945 12:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 ************************************ 00:17:15.945 END TEST nvmf_fio_target 00:17:15.945 ************************************ 00:17:15.945 12:21:21 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:15.945 12:21:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:15.945 12:21:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:15.945 12:21:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 ************************************ 00:17:15.945 START TEST nvmf_bdevio 00:17:15.945 ************************************ 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:15.945 * Looking for test storage... 00:17:15.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.945 12:21:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.089 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.089 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.089 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.089 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.089 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:24.090 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:24.090 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:24.090 Found net devices under 0000:31:00.0: cvl_0_0 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:24.090 
Found net devices under 0000:31:00.1: cvl_0_1 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:17:24.090 00:17:24.090 --- 10.0.0.2 ping statistics --- 00:17:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.090 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:24.090 00:17:24.090 --- 10.0.0.1 ping statistics --- 00:17:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.090 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=641001 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 641001 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 641001 ']' 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:24.090 12:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.091 [2024-06-10 12:21:29.406025] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:24.091 [2024-06-10 12:21:29.406070] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.091 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.091 [2024-06-10 12:21:29.498105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.091 [2024-06-10 12:21:29.583304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.091 [2024-06-10 12:21:29.583367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:24.091 [2024-06-10 12:21:29.583375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.091 [2024-06-10 12:21:29.583382] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.091 [2024-06-10 12:21:29.583389] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.091 [2024-06-10 12:21:29.583564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:24.091 [2024-06-10 12:21:29.583722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:24.091 [2024-06-10 12:21:29.583880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.091 [2024-06-10 12:21:29.583881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.662 [2024-06-10 12:21:30.247497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.662 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.923 Malloc0 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
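The trace above is bdevio.sh standing up a minimal NVMe-oF/TCP target before the I/O tests run. Condensed to the underlying RPC calls exactly as logged (the $rpc shorthand below is illustrative; rpc_cmd in the test wraps scripts/rpc.py against the default /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, -u caps in-capsule data at 8192 bytes
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420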
00:17:24.923 [2024-06-10 12:21:30.312519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:24.923 { 00:17:24.923 "params": { 00:17:24.923 "name": "Nvme$subsystem", 00:17:24.923 "trtype": "$TEST_TRANSPORT", 00:17:24.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.923 "adrfam": "ipv4", 00:17:24.923 "trsvcid": "$NVMF_PORT", 00:17:24.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.923 "hdgst": ${hdgst:-false}, 00:17:24.923 "ddgst": ${ddgst:-false} 00:17:24.923 }, 00:17:24.923 "method": "bdev_nvme_attach_controller" 00:17:24.923 } 00:17:24.923 EOF 00:17:24.923 )") 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:24.923 12:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:24.923 "params": { 00:17:24.923 "name": "Nvme1", 00:17:24.923 "trtype": "tcp", 00:17:24.923 "traddr": "10.0.0.2", 00:17:24.923 "adrfam": "ipv4", 00:17:24.923 "trsvcid": "4420", 00:17:24.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.923 "hdgst": false, 00:17:24.923 "ddgst": false 00:17:24.923 }, 00:17:24.923 "method": "bdev_nvme_attach_controller" 00:17:24.923 }' 00:17:24.923 [2024-06-10 12:21:30.369371] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
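The heredoc-built JSON printed just above reaches the bdevio binary on fd 62 (--json /dev/fd/62) and tells it to attach the target it just created as bdev Nvme1, which the CUnit suite then exercises. In effect it is roughly the same attachment one could make by hand against a running app; a sketch only, with flag spellings per scripts/rpc.py rather than what the test actually runs:

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1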
00:17:24.923 [2024-06-10 12:21:30.369435] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641135 ] 00:17:24.923 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.923 [2024-06-10 12:21:30.444776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:24.923 [2024-06-10 12:21:30.521232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.923 [2024-06-10 12:21:30.521304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.923 [2024-06-10 12:21:30.521307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.183 I/O targets: 00:17:25.183 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:25.183 00:17:25.183 00:17:25.183 CUnit - A unit testing framework for C - Version 2.1-3 00:17:25.183 http://cunit.sourceforge.net/ 00:17:25.183 00:17:25.183 00:17:25.183 Suite: bdevio tests on: Nvme1n1 00:17:25.444 Test: blockdev write read block ...passed 00:17:25.444 Test: blockdev write zeroes read block ...passed 00:17:25.444 Test: blockdev write zeroes read no split ...passed 00:17:25.444 Test: blockdev write zeroes read split ...passed 00:17:25.444 Test: blockdev write zeroes read split partial ...passed 00:17:25.444 Test: blockdev reset ...[2024-06-10 12:21:30.996590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:25.444 [2024-06-10 12:21:30.996651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2035eb0 (9): Bad file descriptor 00:17:25.444 [2024-06-10 12:21:31.012525] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:25.444 passed 00:17:25.444 Test: blockdev write read 8 blocks ...passed 00:17:25.444 Test: blockdev write read size > 128k ...passed 00:17:25.444 Test: blockdev write read invalid size ...passed 00:17:25.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:25.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:25.704 Test: blockdev write read max offset ...passed 00:17:25.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:25.704 Test: blockdev writev readv 8 blocks ...passed 00:17:25.704 Test: blockdev writev readv 30 x 1block ...passed 00:17:25.704 Test: blockdev writev readv block ...passed 00:17:25.704 Test: blockdev writev readv size > 128k ...passed 00:17:25.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:25.704 Test: blockdev comparev and writev ...[2024-06-10 12:21:31.234714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.234748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.234759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.234765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:25.704 [2024-06-10 12:21:31.235929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:25.704 [2024-06-10 12:21:31.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:25.704 passed 00:17:25.964 Test: blockdev nvme passthru rw ...passed 00:17:25.965 Test: blockdev nvme passthru vendor specific ...[2024-06-10 12:21:31.320654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:25.965 [2024-06-10 12:21:31.320665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:25.965 [2024-06-10 12:21:31.320914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:25.965 [2024-06-10 12:21:31.320922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:25.965 [2024-06-10 12:21:31.321130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:25.965 [2024-06-10 12:21:31.321138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:25.965 [2024-06-10 12:21:31.321336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:25.965 [2024-06-10 12:21:31.321344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:25.965 passed 00:17:25.965 Test: blockdev nvme admin passthru ...passed 00:17:25.965 Test: blockdev copy ...passed 00:17:25.965 00:17:25.965 Run Summary: Type Total Ran Passed Failed Inactive 00:17:25.965 suites 1 1 n/a 0 0 00:17:25.965 tests 23 23 23 0 0 00:17:25.965 asserts 152 152 152 0 n/a 00:17:25.965 00:17:25.965 Elapsed time = 1.198 seconds 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.965 rmmod nvme_tcp 00:17:25.965 rmmod nvme_fabrics 00:17:25.965 rmmod nvme_keyring 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 641001 ']' 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 641001 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
641001 ']' 00:17:25.965 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 641001 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 641001 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 641001' 00:17:26.225 killing process with pid 641001 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 641001 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 641001 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.225 12:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.769 12:21:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.769 00:17:28.769 real 0m12.748s 00:17:28.769 user 0m13.409s 00:17:28.769 sys 0m6.527s 00:17:28.769 12:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:28.769 12:21:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.769 ************************************ 00:17:28.769 END TEST nvmf_bdevio 00:17:28.769 ************************************ 00:17:28.769 12:21:33 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:28.769 12:21:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:28.769 12:21:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:28.769 12:21:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.769 ************************************ 00:17:28.769 START TEST nvmf_auth_target 00:17:28.769 ************************************ 00:17:28.769 12:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:28.769 * Looking for test storage... 
00:17:28.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.769 12:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.770 12:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.907 12:21:41 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:36.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:36.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:17:36.907 Found net devices under 0000:31:00.0: cvl_0_0 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:36.907 Found net devices under 0000:31:00.1: cvl_0_1 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.907 12:21:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:17:36.908 00:17:36.908 --- 10.0.0.2 ping statistics --- 00:17:36.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.908 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:17:36.908 00:17:36.908 --- 10.0.0.1 ping statistics --- 00:17:36.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.908 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=646144 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 646144 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 646144 ']' 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
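nvmf_tcp_init above wires the two e810 ports back-to-back through a network namespace: cvl_0_0 (target side) is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, and the target app is then launched inside the namespace. Reduced to the commands the trace shows:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth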
00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:36.908 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.478 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:37.478 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:37.478 12:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.478 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:37.478 12:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=646295 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=830df7eb640e6e937f32d263f37a5b70ae5e0ec3c27c47dc 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yKs 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 830df7eb640e6e937f32d263f37a5b70ae5e0ec3c27c47dc 0 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 830df7eb640e6e937f32d263f37a5b70ae5e0ec3c27c47dc 0 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=830df7eb640e6e937f32d263f37a5b70ae5e0ec3c27c47dc 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:37.478 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.741 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yKs 00:17:37.741 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yKs 00:17:37.741 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.yKs 00:17:37.741 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=68f00ae17793472a5536ef6c309fef0f604d1e878ba3a8dd52bda00f8af8e481 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Hrp 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 68f00ae17793472a5536ef6c309fef0f604d1e878ba3a8dd52bda00f8af8e481 3 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 68f00ae17793472a5536ef6c309fef0f604d1e878ba3a8dd52bda00f8af8e481 3 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=68f00ae17793472a5536ef6c309fef0f604d1e878ba3a8dd52bda00f8af8e481 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Hrp 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Hrp 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Hrp 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e26ddcf28d0e2bfbb54c5055ec8ca146 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sjl 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e26ddcf28d0e2bfbb54c5055ec8ca146 1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e26ddcf28d0e2bfbb54c5055ec8ca146 1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e26ddcf28d0e2bfbb54c5055ec8ca146 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sjl 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sjl 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.sjl 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=04d9db17f2889dfc4ea25a29c5866a848fedd54a6be34542 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.r5z 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 04d9db17f2889dfc4ea25a29c5866a848fedd54a6be34542 2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 04d9db17f2889dfc4ea25a29c5866a848fedd54a6be34542 2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=04d9db17f2889dfc4ea25a29c5866a848fedd54a6be34542 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.r5z 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.r5z 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.r5z 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f17a86761c398bc027ab999cdfac821faa018adef4e1611a 00:17:37.742 
12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.blL 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f17a86761c398bc027ab999cdfac821faa018adef4e1611a 2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f17a86761c398bc027ab999cdfac821faa018adef4e1611a 2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f17a86761c398bc027ab999cdfac821faa018adef4e1611a 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.blL 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.blL 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.blL 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.742 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=70a2feb256fa6e992ae7ffad1ecf2804 00:17:38.040 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:38.040 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JHa 00:17:38.040 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 70a2feb256fa6e992ae7ffad1ecf2804 1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 70a2feb256fa6e992ae7ffad1ecf2804 1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=70a2feb256fa6e992ae7ffad1ecf2804 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JHa 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JHa 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.JHa 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c3dec5ccb7954650b13bc250868eed78cb0a978996c64480a6df5e5d135a6cae 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yJ1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c3dec5ccb7954650b13bc250868eed78cb0a978996c64480a6df5e5d135a6cae 3 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c3dec5ccb7954650b13bc250868eed78cb0a978996c64480a6df5e5d135a6cae 3 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c3dec5ccb7954650b13bc250868eed78cb0a978996c64480a6df5e5d135a6cae 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yJ1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yJ1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.yJ1 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 646144 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 646144 ']' 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
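Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as an ASCII hex string, and hands that string plus a digest id (0=null, 1=sha256, 2=sha384, 3=sha512) to an inline "python -" snippet whose body is not echoed in the trace. A reconstruction of that formatting step, consistent with how nvme-cli represents DH-HMAC-CHAP secrets (base64 of the secret with its CRC32 appended least-significant byte first, wrapped as DHHC-1:<id>:<b64>:):

format_dhchap_key() {
    # Sketch of the helper the trace drives via "python -"; the heredoc body
    # is an assumption reconstructed from the DHHC-1 secret format.
    local key=$1 digest=$2
    python - << EOF
import base64, zlib
crc = zlib.crc32(b"$key").to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format($digest, base64.b64encode(b"$key" + crc).decode()))
EOF
}

# The 48-hex-char key drawn above should round-trip into the
# DHHC-1:00:ODMwZGY3...: secret passed to nvme connect later in this log:
format_dhchap_key 830df7eb640e6e937f32d263f37a5b70ae5e0ec3c27c47dc 0

Because the ASCII hex string itself is treated as the secret bytes, the base64 payloads in the nvme connect commands below decode straight back to the hex keys printed here.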
00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 646295 /var/tmp/host.sock 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 646295 ']' 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:38.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:38.041 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yKs 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yKs 00:17:38.303 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yKs 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Hrp ]] 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hrp 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hrp 00:17:38.564 12:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hrp 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sjl 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sjl 00:17:38.564 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sjl 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.r5z ]] 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5z 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5z 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.r5z 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.blL 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.blL 00:17:38.826 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.blL 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.JHa ]] 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JHa 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JHa 00:17:39.087 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.JHa 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yJ1 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yJ1 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yJ1 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.349 12:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.611 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.873 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.873 { 00:17:39.873 "cntlid": 1, 00:17:39.873 "qid": 0, 00:17:39.873 "state": "enabled", 00:17:39.873 "listen_address": { 00:17:39.873 "trtype": "TCP", 00:17:39.873 "adrfam": "IPv4", 00:17:39.873 "traddr": "10.0.0.2", 00:17:39.873 "trsvcid": "4420" 00:17:39.873 }, 00:17:39.873 "peer_address": { 00:17:39.873 "trtype": "TCP", 00:17:39.873 "adrfam": "IPv4", 00:17:39.873 "traddr": "10.0.0.1", 00:17:39.873 "trsvcid": "35540" 00:17:39.873 }, 00:17:39.873 "auth": { 00:17:39.873 "state": "completed", 00:17:39.873 "digest": "sha256", 00:17:39.873 "dhgroup": "null" 00:17:39.873 } 00:17:39.873 } 00:17:39.873 ]' 00:17:39.873 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.133 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.392 12:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.962 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.223 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.223 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.483 12:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.483 12:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.483 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.483 { 00:17:41.483 "cntlid": 3, 00:17:41.483 "qid": 0, 00:17:41.483 "state": "enabled", 00:17:41.483 "listen_address": { 00:17:41.483 
"trtype": "TCP", 00:17:41.483 "adrfam": "IPv4", 00:17:41.483 "traddr": "10.0.0.2", 00:17:41.483 "trsvcid": "4420" 00:17:41.483 }, 00:17:41.483 "peer_address": { 00:17:41.483 "trtype": "TCP", 00:17:41.483 "adrfam": "IPv4", 00:17:41.483 "traddr": "10.0.0.1", 00:17:41.483 "trsvcid": "35566" 00:17:41.483 }, 00:17:41.483 "auth": { 00:17:41.483 "state": "completed", 00:17:41.483 "digest": "sha256", 00:17:41.483 "dhgroup": "null" 00:17:41.483 } 00:17:41.483 } 00:17:41.483 ]' 00:17:41.483 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.483 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.483 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.743 12:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.686 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.947 00:17:42.947 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.947 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.947 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.208 { 00:17:43.208 "cntlid": 5, 00:17:43.208 "qid": 0, 00:17:43.208 "state": "enabled", 00:17:43.208 "listen_address": { 00:17:43.208 "trtype": "TCP", 00:17:43.208 "adrfam": "IPv4", 00:17:43.208 "traddr": "10.0.0.2", 00:17:43.208 "trsvcid": "4420" 00:17:43.208 }, 00:17:43.208 "peer_address": { 00:17:43.208 "trtype": "TCP", 00:17:43.208 "adrfam": "IPv4", 00:17:43.208 "traddr": "10.0.0.1", 00:17:43.208 "trsvcid": "35592" 00:17:43.208 }, 00:17:43.208 "auth": { 00:17:43.208 "state": "completed", 00:17:43.208 "digest": "sha256", 00:17:43.208 "dhgroup": "null" 00:17:43.208 } 00:17:43.208 } 00:17:43.208 ]' 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.208 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.468 12:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.040 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.300 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.559 00:17:44.559 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.559 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.559 12:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.559 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.559 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.559 12:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.560 12:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.818 { 00:17:44.818 "cntlid": 7, 00:17:44.818 "qid": 0, 00:17:44.818 "state": "enabled", 00:17:44.818 "listen_address": { 00:17:44.818 "trtype": "TCP", 00:17:44.818 "adrfam": "IPv4", 00:17:44.818 "traddr": "10.0.0.2", 00:17:44.818 "trsvcid": "4420" 00:17:44.818 }, 00:17:44.818 "peer_address": { 00:17:44.818 "trtype": "TCP", 00:17:44.818 "adrfam": "IPv4", 00:17:44.818 "traddr": "10.0.0.1", 00:17:44.818 "trsvcid": "35622" 00:17:44.818 }, 00:17:44.818 "auth": { 00:17:44.818 "state": "completed", 00:17:44.818 "digest": "sha256", 00:17:44.818 "dhgroup": "null" 00:17:44.818 } 00:17:44.818 } 00:17:44.818 ]' 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.818 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.077 12:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.647 
12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.647 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.907 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.167 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.167 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.427 12:21:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.427 { 00:17:46.427 "cntlid": 9, 00:17:46.427 "qid": 0, 00:17:46.427 "state": "enabled", 00:17:46.427 "listen_address": { 00:17:46.427 "trtype": "TCP", 00:17:46.427 "adrfam": "IPv4", 00:17:46.427 "traddr": "10.0.0.2", 00:17:46.427 "trsvcid": "4420" 00:17:46.427 }, 00:17:46.427 "peer_address": { 00:17:46.427 "trtype": "TCP", 00:17:46.427 "adrfam": "IPv4", 00:17:46.427 "traddr": "10.0.0.1", 00:17:46.427 "trsvcid": "35640" 00:17:46.427 }, 00:17:46.427 "auth": { 00:17:46.427 "state": "completed", 00:17:46.427 "digest": "sha256", 00:17:46.427 "dhgroup": "ffdhe2048" 00:17:46.427 } 00:17:46.427 } 00:17:46.427 ]' 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.427 12:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.687 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.256 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.516 12:21:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.516 12:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.776 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.776 { 00:17:47.776 "cntlid": 11, 00:17:47.776 "qid": 0, 00:17:47.776 "state": "enabled", 00:17:47.776 "listen_address": { 00:17:47.776 "trtype": "TCP", 00:17:47.776 "adrfam": "IPv4", 00:17:47.776 "traddr": "10.0.0.2", 00:17:47.776 "trsvcid": "4420" 00:17:47.776 }, 00:17:47.776 "peer_address": { 00:17:47.776 "trtype": "TCP", 00:17:47.776 "adrfam": "IPv4", 00:17:47.776 "traddr": "10.0.0.1", 00:17:47.776 "trsvcid": "36784" 00:17:47.776 }, 00:17:47.776 "auth": { 00:17:47.776 "state": "completed", 00:17:47.776 "digest": "sha256", 00:17:47.776 "dhgroup": "ffdhe2048" 00:17:47.776 } 00:17:47.776 } 00:17:47.776 ]' 00:17:47.776 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.036 12:21:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.036 12:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.979 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.240 00:17:49.240 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.240 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.240 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.501 { 00:17:49.501 "cntlid": 13, 00:17:49.501 "qid": 0, 00:17:49.501 "state": "enabled", 00:17:49.501 "listen_address": { 00:17:49.501 "trtype": "TCP", 00:17:49.501 "adrfam": "IPv4", 00:17:49.501 "traddr": "10.0.0.2", 00:17:49.501 "trsvcid": "4420" 00:17:49.501 }, 00:17:49.501 "peer_address": { 00:17:49.501 "trtype": "TCP", 00:17:49.501 "adrfam": "IPv4", 00:17:49.501 "traddr": "10.0.0.1", 00:17:49.501 "trsvcid": "36804" 00:17:49.501 }, 00:17:49.501 "auth": { 00:17:49.501 "state": "completed", 00:17:49.501 "digest": "sha256", 00:17:49.501 "dhgroup": "ffdhe2048" 00:17:49.501 } 00:17:49.501 } 00:17:49.501 ]' 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.501 12:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.501 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.501 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.501 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.501 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.501 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.761 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:17:50.331 12:21:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.331 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:50.331 12:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.331 12:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.592 12:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.592 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.592 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.592 12:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.592 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.853 00:17:50.853 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.853 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.853 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
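The records above show one complete pass of the script's connect/authenticate cycle for sha256 with ffdhe2048 and key id 3: the host side is restricted to a single digest and DH group, the host NQN is allowed on the subsystem with the key under test, a controller is attached in-band, and the qpair read-back that follows below verifies what was negotiated. A minimal sketch of that sequence, reconstructed from the logged commands (hostrpc and rpc_cmd are the wrappers seen in the log; the inline hostrpc body and variable names here are assumptions for illustration -- rpc_cmd is the target-side RPC wrapper from autotest_common.sh):

    # Assumed reconstruction of the hostrpc wrapper: rpc.py against the
    # host application's RPC socket rather than the target's.
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Restrict the host to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # 2. Allow the host on the target with the DH-HMAC-CHAP key under test.
    #    A controller key (--dhchap-ctrlr-key ckeyN) is passed only when one
    #    exists for this key id; key3 in the log has none.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # 3. Attach in-band -- this is where the authentication actually runs.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3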
00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.121 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.121 { 00:17:51.121 "cntlid": 15, 00:17:51.121 "qid": 0, 00:17:51.122 "state": "enabled", 00:17:51.122 "listen_address": { 00:17:51.122 "trtype": "TCP", 00:17:51.122 "adrfam": "IPv4", 00:17:51.122 "traddr": "10.0.0.2", 00:17:51.122 "trsvcid": "4420" 00:17:51.122 }, 00:17:51.122 "peer_address": { 00:17:51.122 "trtype": "TCP", 00:17:51.122 "adrfam": "IPv4", 00:17:51.122 "traddr": "10.0.0.1", 00:17:51.122 "trsvcid": "36834" 00:17:51.122 }, 00:17:51.122 "auth": { 00:17:51.122 "state": "completed", 00:17:51.122 "digest": "sha256", 00:17:51.122 "dhgroup": "ffdhe2048" 00:17:51.122 } 00:17:51.122 } 00:17:51.122 ]' 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.122 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.383 12:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.953 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.213 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.474 00:17:52.474 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.474 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.474 12:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.736 { 00:17:52.736 "cntlid": 17, 00:17:52.736 "qid": 0, 00:17:52.736 "state": "enabled", 00:17:52.736 "listen_address": { 00:17:52.736 "trtype": "TCP", 00:17:52.736 "adrfam": "IPv4", 00:17:52.736 "traddr": "10.0.0.2", 00:17:52.736 "trsvcid": "4420" 00:17:52.736 }, 00:17:52.736 "peer_address": { 00:17:52.736 "trtype": "TCP", 00:17:52.736 "adrfam": "IPv4", 00:17:52.736 "traddr": "10.0.0.1", 00:17:52.736 "trsvcid": "36852" 00:17:52.736 }, 00:17:52.736 "auth": { 00:17:52.736 "state": "completed", 00:17:52.736 "digest": "sha256", 00:17:52.736 "dhgroup": "ffdhe3072" 00:17:52.736 } 00:17:52.736 } 00:17:52.736 ]' 00:17:52.736 12:21:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.736 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.997 12:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.570 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.832 
12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.832 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.093 00:17:54.093 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.093 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.093 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.093 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.354 { 00:17:54.354 "cntlid": 19, 00:17:54.354 "qid": 0, 00:17:54.354 "state": "enabled", 00:17:54.354 "listen_address": { 00:17:54.354 "trtype": "TCP", 00:17:54.354 "adrfam": "IPv4", 00:17:54.354 "traddr": "10.0.0.2", 00:17:54.354 "trsvcid": "4420" 00:17:54.354 }, 00:17:54.354 "peer_address": { 00:17:54.354 "trtype": "TCP", 00:17:54.354 "adrfam": "IPv4", 00:17:54.354 "traddr": "10.0.0.1", 00:17:54.354 "trsvcid": "36872" 00:17:54.354 }, 00:17:54.354 "auth": { 00:17:54.354 "state": "completed", 00:17:54.354 "digest": "sha256", 00:17:54.354 "dhgroup": "ffdhe3072" 00:17:54.354 } 00:17:54.354 } 00:17:54.354 ]' 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.354 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.615 12:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.234 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.494 12:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.754 00:17:55.754 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.754 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
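Every attach is followed by the same three read-back assertions: the qpair list for the subsystem is fetched once, then jq extracts the negotiated digest, DH group, and authentication state, each of which must match what was configured. A sketch of that check as it applies to the ffdhe3072 passes above (capturing the RPC output into a variable is an assumed reconstruction of the logged jq/[[ pattern; rpc_cmd is the target-side wrapper from autotest_common.sh):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # All three fields come from the JSON dumped in the log above.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Tear down before the next key id is tried.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0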
00:17:55.754 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.754 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.754 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.755 12:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.755 12:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.755 12:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.755 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.755 { 00:17:55.755 "cntlid": 21, 00:17:55.755 "qid": 0, 00:17:55.755 "state": "enabled", 00:17:55.755 "listen_address": { 00:17:55.755 "trtype": "TCP", 00:17:55.755 "adrfam": "IPv4", 00:17:55.755 "traddr": "10.0.0.2", 00:17:55.755 "trsvcid": "4420" 00:17:55.755 }, 00:17:55.755 "peer_address": { 00:17:55.755 "trtype": "TCP", 00:17:55.755 "adrfam": "IPv4", 00:17:55.755 "traddr": "10.0.0.1", 00:17:55.755 "trsvcid": "36886" 00:17:55.755 }, 00:17:55.755 "auth": { 00:17:55.755 "state": "completed", 00:17:55.755 "digest": "sha256", 00:17:55.755 "dhgroup": "ffdhe3072" 00:17:55.755 } 00:17:55.755 } 00:17:55.755 ]' 00:17:55.755 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.014 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.274 12:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.845 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.106 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.366 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.366 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.366 { 00:17:57.366 "cntlid": 23, 00:17:57.366 "qid": 0, 00:17:57.366 "state": "enabled", 00:17:57.366 "listen_address": { 00:17:57.366 "trtype": "TCP", 00:17:57.366 "adrfam": "IPv4", 00:17:57.366 "traddr": "10.0.0.2", 00:17:57.366 "trsvcid": "4420" 00:17:57.366 }, 00:17:57.366 "peer_address": { 00:17:57.366 "trtype": "TCP", 00:17:57.366 "adrfam": "IPv4", 
00:17:57.367 "traddr": "10.0.0.1", 00:17:57.367 "trsvcid": "49616" 00:17:57.367 }, 00:17:57.367 "auth": { 00:17:57.367 "state": "completed", 00:17:57.367 "digest": "sha256", 00:17:57.367 "dhgroup": "ffdhe3072" 00:17:57.367 } 00:17:57.367 } 00:17:57.367 ]' 00:17:57.367 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.367 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.367 12:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.628 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.569 12:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.569 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.830 00:17:58.830 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.830 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.830 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.091 { 00:17:59.091 "cntlid": 25, 00:17:59.091 "qid": 0, 00:17:59.091 "state": "enabled", 00:17:59.091 "listen_address": { 00:17:59.091 "trtype": "TCP", 00:17:59.091 "adrfam": "IPv4", 00:17:59.091 "traddr": "10.0.0.2", 00:17:59.091 "trsvcid": "4420" 00:17:59.091 }, 00:17:59.091 "peer_address": { 00:17:59.091 "trtype": "TCP", 00:17:59.091 "adrfam": "IPv4", 00:17:59.091 "traddr": "10.0.0.1", 00:17:59.091 "trsvcid": "49636" 00:17:59.091 }, 00:17:59.091 "auth": { 00:17:59.091 "state": "completed", 00:17:59.091 "digest": "sha256", 00:17:59.091 "dhgroup": "ffdhe4096" 00:17:59.091 } 00:17:59.091 } 00:17:59.091 ]' 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.091 12:22:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.351 12:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:17:59.922 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.922 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:59.922 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.922 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.181 12:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.182 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.182 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.441 00:18:00.441 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.442 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.442 12:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.701 { 00:18:00.701 "cntlid": 27, 00:18:00.701 "qid": 0, 00:18:00.701 "state": "enabled", 00:18:00.701 "listen_address": { 00:18:00.701 "trtype": "TCP", 00:18:00.701 "adrfam": "IPv4", 00:18:00.701 "traddr": "10.0.0.2", 00:18:00.701 "trsvcid": "4420" 00:18:00.701 }, 00:18:00.701 "peer_address": { 00:18:00.701 "trtype": "TCP", 00:18:00.701 "adrfam": "IPv4", 00:18:00.701 "traddr": "10.0.0.1", 00:18:00.701 "trsvcid": "49660" 00:18:00.701 }, 00:18:00.701 "auth": { 00:18:00.701 "state": "completed", 00:18:00.701 "digest": "sha256", 00:18:00.701 "dhgroup": "ffdhe4096" 00:18:00.701 } 00:18:00.701 } 00:18:00.701 ]' 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.701 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.961 12:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:01.532 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
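Each cycle also exercises the kernel initiator: nvme-cli is handed the plaintext secrets directly in the DHHC-1:<id>:<base64 secret>: format, and the subsequent "disconnected 1 controller(s)" line confirms the authenticated session actually came up. A sketch of that leg using the key1/ckey1 secrets logged just above (hostnqn as in the earlier sketch; the host UUID doubles as --hostid here, mirroring the log):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        --dhchap-secret 'DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D:' \
        --dhchap-ctrl-secret 'DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==:'
    # A successful handshake is confirmed by a clean teardown:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0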
00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.793 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.054 00:18:02.054 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.054 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.054 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.315 
12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.315 { 00:18:02.315 "cntlid": 29, 00:18:02.315 "qid": 0, 00:18:02.315 "state": "enabled", 00:18:02.315 "listen_address": { 00:18:02.315 "trtype": "TCP", 00:18:02.315 "adrfam": "IPv4", 00:18:02.315 "traddr": "10.0.0.2", 00:18:02.315 "trsvcid": "4420" 00:18:02.315 }, 00:18:02.315 "peer_address": { 00:18:02.315 "trtype": "TCP", 00:18:02.315 "adrfam": "IPv4", 00:18:02.315 "traddr": "10.0.0.1", 00:18:02.315 "trsvcid": "49690" 00:18:02.315 }, 00:18:02.315 "auth": { 00:18:02.315 "state": "completed", 00:18:02.315 "digest": "sha256", 00:18:02.315 "dhgroup": "ffdhe4096" 00:18:02.315 } 00:18:02.315 } 00:18:02.315 ]' 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.315 12:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.577 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:03.148 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.410 12:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.670 00:18:03.670 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.670 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.670 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.931 { 00:18:03.931 "cntlid": 31, 00:18:03.931 "qid": 0, 00:18:03.931 "state": "enabled", 00:18:03.931 "listen_address": { 00:18:03.931 "trtype": "TCP", 00:18:03.931 "adrfam": "IPv4", 00:18:03.931 "traddr": "10.0.0.2", 00:18:03.931 "trsvcid": "4420" 00:18:03.931 }, 00:18:03.931 "peer_address": { 00:18:03.931 "trtype": "TCP", 00:18:03.931 "adrfam": "IPv4", 00:18:03.931 "traddr": "10.0.0.1", 00:18:03.931 "trsvcid": "49714" 00:18:03.931 }, 00:18:03.931 "auth": { 00:18:03.931 "state": "completed", 00:18:03.931 "digest": "sha256", 00:18:03.931 "dhgroup": "ffdhe4096" 00:18:03.931 } 00:18:03.931 } 00:18:03.931 ]' 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.931 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.193 12:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:04.765 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.765 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:04.765 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.765 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:18:05.026 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.288 00:18:05.550 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.550 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.550 12:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.550 { 00:18:05.550 "cntlid": 33, 00:18:05.550 "qid": 0, 00:18:05.550 "state": "enabled", 00:18:05.550 "listen_address": { 00:18:05.550 "trtype": "TCP", 00:18:05.550 "adrfam": "IPv4", 00:18:05.550 "traddr": "10.0.0.2", 00:18:05.550 "trsvcid": "4420" 00:18:05.550 }, 00:18:05.550 "peer_address": { 00:18:05.550 "trtype": "TCP", 00:18:05.550 "adrfam": "IPv4", 00:18:05.550 "traddr": "10.0.0.1", 00:18:05.550 "trsvcid": "49746" 00:18:05.550 }, 00:18:05.550 "auth": { 00:18:05.550 "state": "completed", 00:18:05.550 "digest": "sha256", 00:18:05.550 "dhgroup": "ffdhe6144" 00:18:05.550 } 00:18:05.550 } 00:18:05.550 ]' 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.550 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.811 12:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:06.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.754 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.014 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
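# The rpc_cmd above asks the target for cnode0's live queue pairs; the JSON that
# follows is then asserted field by field (digest, dhgroup, auth state). A minimal
# standalone sketch of that verification step, assuming the target sits on the
# default RPC socket (only the host-side calls in this log go through /var/tmp/host.sock):
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]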
00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.274 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.274 { 00:18:07.274 "cntlid": 35, 00:18:07.274 "qid": 0, 00:18:07.274 "state": "enabled", 00:18:07.274 "listen_address": { 00:18:07.274 "trtype": "TCP", 00:18:07.274 "adrfam": "IPv4", 00:18:07.274 "traddr": "10.0.0.2", 00:18:07.274 "trsvcid": "4420" 00:18:07.274 }, 00:18:07.274 "peer_address": { 00:18:07.274 "trtype": "TCP", 00:18:07.274 "adrfam": "IPv4", 00:18:07.275 "traddr": "10.0.0.1", 00:18:07.275 "trsvcid": "48070" 00:18:07.275 }, 00:18:07.275 "auth": { 00:18:07.275 "state": "completed", 00:18:07.275 "digest": "sha256", 00:18:07.275 "dhgroup": "ffdhe6144" 00:18:07.275 } 00:18:07.275 } 00:18:07.275 ]' 00:18:07.275 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.275 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.275 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.535 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.535 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.535 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.535 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.535 12:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.535 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
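# Before every round, the host's allowed negotiation set is narrowed to exactly
# one digest/dhgroup pair, so a successful attach can only mean that pair was
# negotiated. A sketch of the re-arming step for the sha256/ffdhe6144 round shown
# here (rpc.py path shortened from the full workspace path used in this log):
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144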
00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.479 12:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.740 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.001 { 00:18:09.001 "cntlid": 37, 00:18:09.001 "qid": 0, 00:18:09.001 "state": "enabled", 00:18:09.001 "listen_address": { 00:18:09.001 "trtype": "TCP", 00:18:09.001 "adrfam": "IPv4", 00:18:09.001 "traddr": "10.0.0.2", 00:18:09.001 "trsvcid": "4420" 00:18:09.001 }, 00:18:09.001 "peer_address": { 00:18:09.001 "trtype": "TCP", 00:18:09.001 "adrfam": "IPv4", 00:18:09.001 "traddr": "10.0.0.1", 00:18:09.001 "trsvcid": "48106" 00:18:09.001 }, 00:18:09.001 "auth": { 00:18:09.001 "state": "completed", 00:18:09.001 "digest": "sha256", 00:18:09.001 "dhgroup": "ffdhe6144" 00:18:09.001 } 00:18:09.001 } 00:18:09.001 ]' 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.001 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.263 12:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.203 12:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.464 00:18:10.464 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.464 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.464 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.724 { 00:18:10.724 "cntlid": 39, 00:18:10.724 "qid": 0, 00:18:10.724 "state": "enabled", 00:18:10.724 "listen_address": { 00:18:10.724 "trtype": "TCP", 00:18:10.724 "adrfam": "IPv4", 00:18:10.724 "traddr": "10.0.0.2", 00:18:10.724 "trsvcid": "4420" 00:18:10.724 }, 00:18:10.724 "peer_address": { 00:18:10.724 "trtype": "TCP", 00:18:10.724 "adrfam": "IPv4", 00:18:10.724 "traddr": "10.0.0.1", 00:18:10.724 "trsvcid": "48136" 00:18:10.724 }, 00:18:10.724 "auth": { 00:18:10.724 "state": "completed", 00:18:10.724 "digest": "sha256", 00:18:10.724 "dhgroup": "ffdhe6144" 00:18:10.724 } 00:18:10.724 } 00:18:10.724 ]' 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.724 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.984 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.984 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.985 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.985 12:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:12.005 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.005 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.006 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.578 00:18:12.578 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.578 12:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.578 12:22:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.578 { 00:18:12.578 "cntlid": 41, 00:18:12.578 "qid": 0, 00:18:12.578 "state": "enabled", 00:18:12.578 "listen_address": { 00:18:12.578 "trtype": "TCP", 00:18:12.578 "adrfam": "IPv4", 00:18:12.578 "traddr": "10.0.0.2", 00:18:12.578 "trsvcid": "4420" 00:18:12.578 }, 00:18:12.578 "peer_address": { 00:18:12.578 "trtype": "TCP", 00:18:12.578 "adrfam": "IPv4", 00:18:12.578 "traddr": "10.0.0.1", 00:18:12.578 "trsvcid": "48162" 00:18:12.578 }, 00:18:12.578 "auth": { 00:18:12.578 "state": "completed", 00:18:12.578 "digest": "sha256", 00:18:12.578 "dhgroup": "ffdhe8192" 00:18:12.578 } 00:18:12.578 } 00:18:12.578 ]' 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.578 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.579 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.841 12:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.784 12:22:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.784 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.785 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.362 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.362 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.362 { 00:18:14.362 "cntlid": 43, 00:18:14.362 "qid": 0, 00:18:14.362 "state": "enabled", 00:18:14.362 "listen_address": { 00:18:14.362 "trtype": "TCP", 00:18:14.362 "adrfam": "IPv4", 00:18:14.362 "traddr": "10.0.0.2", 00:18:14.362 "trsvcid": "4420" 00:18:14.362 }, 00:18:14.362 "peer_address": { 00:18:14.362 "trtype": "TCP", 00:18:14.362 
"adrfam": "IPv4", 00:18:14.362 "traddr": "10.0.0.1", 00:18:14.362 "trsvcid": "48198" 00:18:14.362 }, 00:18:14.362 "auth": { 00:18:14.362 "state": "completed", 00:18:14.362 "digest": "sha256", 00:18:14.362 "dhgroup": "ffdhe8192" 00:18:14.362 } 00:18:14.362 } 00:18:14.362 ]' 00:18:14.623 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.623 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.623 12:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.623 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.623 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.623 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.623 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.623 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.882 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.453 12:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.714 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.285 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.285 { 00:18:16.285 "cntlid": 45, 00:18:16.285 "qid": 0, 00:18:16.285 "state": "enabled", 00:18:16.285 "listen_address": { 00:18:16.285 "trtype": "TCP", 00:18:16.285 "adrfam": "IPv4", 00:18:16.285 "traddr": "10.0.0.2", 00:18:16.285 "trsvcid": "4420" 00:18:16.285 }, 00:18:16.285 "peer_address": { 00:18:16.285 "trtype": "TCP", 00:18:16.285 "adrfam": "IPv4", 00:18:16.285 "traddr": "10.0.0.1", 00:18:16.285 "trsvcid": "48228" 00:18:16.285 }, 00:18:16.285 "auth": { 00:18:16.285 "state": "completed", 00:18:16.285 "digest": "sha256", 00:18:16.285 "dhgroup": "ffdhe8192" 00:18:16.285 } 00:18:16.285 } 00:18:16.285 ]' 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.285 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.545 12:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.545 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.485 12:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.055 00:18:18.055 12:22:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.055 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.055 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.316 { 00:18:18.316 "cntlid": 47, 00:18:18.316 "qid": 0, 00:18:18.316 "state": "enabled", 00:18:18.316 "listen_address": { 00:18:18.316 "trtype": "TCP", 00:18:18.316 "adrfam": "IPv4", 00:18:18.316 "traddr": "10.0.0.2", 00:18:18.316 "trsvcid": "4420" 00:18:18.316 }, 00:18:18.316 "peer_address": { 00:18:18.316 "trtype": "TCP", 00:18:18.316 "adrfam": "IPv4", 00:18:18.316 "traddr": "10.0.0.1", 00:18:18.316 "trsvcid": "51932" 00:18:18.316 }, 00:18:18.316 "auth": { 00:18:18.316 "state": "completed", 00:18:18.316 "digest": "sha256", 00:18:18.316 "dhgroup": "ffdhe8192" 00:18:18.316 } 00:18:18.316 } 00:18:18.316 ]' 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.316 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.576 12:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.147 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.408 12:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.668 00:18:19.668 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.668 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.668 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.668 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.668 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:19.928 { 00:18:19.928 "cntlid": 49, 00:18:19.928 "qid": 0, 00:18:19.928 "state": "enabled", 00:18:19.928 "listen_address": { 00:18:19.928 "trtype": "TCP", 00:18:19.928 "adrfam": "IPv4", 00:18:19.928 "traddr": "10.0.0.2", 00:18:19.928 "trsvcid": "4420" 00:18:19.928 }, 00:18:19.928 "peer_address": { 00:18:19.928 "trtype": "TCP", 00:18:19.928 "adrfam": "IPv4", 00:18:19.928 "traddr": "10.0.0.1", 00:18:19.928 "trsvcid": "51952" 00:18:19.928 }, 00:18:19.928 "auth": { 00:18:19.928 "state": "completed", 00:18:19.928 "digest": "sha384", 00:18:19.928 "dhgroup": "null" 00:18:19.928 } 00:18:19.928 } 00:18:19.928 ]' 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.928 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.188 12:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.758 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.019 12:22:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.019 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.279 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.279 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.539 { 00:18:21.539 "cntlid": 51, 00:18:21.539 "qid": 0, 00:18:21.539 "state": "enabled", 00:18:21.539 "listen_address": { 00:18:21.539 "trtype": "TCP", 00:18:21.539 "adrfam": "IPv4", 00:18:21.539 "traddr": "10.0.0.2", 00:18:21.539 "trsvcid": "4420" 00:18:21.539 }, 00:18:21.539 "peer_address": { 00:18:21.539 "trtype": "TCP", 00:18:21.539 "adrfam": "IPv4", 00:18:21.539 "traddr": "10.0.0.1", 00:18:21.539 "trsvcid": "51984" 00:18:21.539 }, 00:18:21.539 "auth": { 00:18:21.539 "state": "completed", 00:18:21.539 "digest": "sha384", 00:18:21.539 "dhgroup": "null" 00:18:21.539 } 00:18:21.539 } 00:18:21.539 ]' 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.539 12:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.539 12:22:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.539 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.539 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.539 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.799 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.369 12:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
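Every connect_authenticate round in this trace drives the same sequence, varying only the digest, DH group, and key index. A condensed sketch of one round, reconstructed from the commands logged above (rpc.py paths shortened; $hostnqn, $hostid, $i, $key, and $ckey are stand-ins for the literal NQN, UUID, and DHHC-1 values in the log; rpc_cmd talks to the target's default RPC socket, while hostrpc reaches the host app via -s /var/tmp/host.sock):

  # host side: permit only the digest/dhgroup combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null
  # target side: register the host NQN with this round's DH-HMAC-CHAP keys
  # (the --dhchap-ctrlr-key argument is dropped on key3 rounds, where no ckey is set)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
  # authenticate through the SPDK host stack, verify the qpair, then tear down
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator via nvme-cli, then clean up
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The trace below continues with the key2 round of the sha384/null group, followed by key3 and then the ffdhe2048 groups.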
00:18:22.629 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.889 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.889 { 00:18:22.889 "cntlid": 53, 00:18:22.889 "qid": 0, 00:18:22.889 "state": "enabled", 00:18:22.889 "listen_address": { 00:18:22.889 "trtype": "TCP", 00:18:22.889 "adrfam": "IPv4", 00:18:22.889 "traddr": "10.0.0.2", 00:18:22.889 "trsvcid": "4420" 00:18:22.889 }, 00:18:22.889 "peer_address": { 00:18:22.889 "trtype": "TCP", 00:18:22.889 "adrfam": "IPv4", 00:18:22.889 "traddr": "10.0.0.1", 00:18:22.889 "trsvcid": "52018" 00:18:22.889 }, 00:18:22.889 "auth": { 00:18:22.889 "state": "completed", 00:18:22.889 "digest": "sha384", 00:18:22.889 "dhgroup": "null" 00:18:22.889 } 00:18:22.889 } 00:18:22.889 ]' 00:18:22.889 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.149 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.409 12:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.979 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.239 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.499 00:18:24.499 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.499 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.499 12:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.499 { 00:18:24.499 "cntlid": 55, 00:18:24.499 "qid": 0, 00:18:24.499 "state": "enabled", 00:18:24.499 "listen_address": { 00:18:24.499 "trtype": "TCP", 00:18:24.499 "adrfam": "IPv4", 00:18:24.499 "traddr": "10.0.0.2", 00:18:24.499 "trsvcid": "4420" 00:18:24.499 }, 00:18:24.499 "peer_address": { 00:18:24.499 "trtype": "TCP", 00:18:24.499 "adrfam": "IPv4", 00:18:24.499 "traddr": "10.0.0.1", 00:18:24.499 "trsvcid": "52032" 00:18:24.499 }, 00:18:24.499 "auth": { 00:18:24.499 "state": "completed", 00:18:24.499 "digest": "sha384", 00:18:24.499 "dhgroup": "null" 00:18:24.499 } 00:18:24.499 } 00:18:24.499 ]' 00:18:24.499 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.760 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.020 12:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:25.592 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.593 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:25.854 12:22:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.854 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.116 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.116 { 00:18:26.116 "cntlid": 57, 00:18:26.116 "qid": 0, 00:18:26.116 "state": "enabled", 00:18:26.116 "listen_address": { 00:18:26.116 "trtype": "TCP", 00:18:26.116 "adrfam": "IPv4", 00:18:26.116 "traddr": "10.0.0.2", 00:18:26.116 "trsvcid": "4420" 00:18:26.116 }, 00:18:26.116 "peer_address": { 00:18:26.116 "trtype": "TCP", 00:18:26.116 "adrfam": "IPv4", 00:18:26.116 "traddr": "10.0.0.1", 00:18:26.116 "trsvcid": "52056" 00:18:26.116 }, 00:18:26.116 "auth": { 00:18:26.116 "state": "completed", 00:18:26.116 "digest": "sha384", 00:18:26.116 "dhgroup": "ffdhe2048" 00:18:26.116 } 00:18:26.116 } 00:18:26.116 ]' 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.116 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.116 12:22:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.378 12:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.322 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.323 12:22:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.323 12:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.584 00:18:27.584 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.584 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.584 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.844 { 00:18:27.844 "cntlid": 59, 00:18:27.844 "qid": 0, 00:18:27.844 "state": "enabled", 00:18:27.844 "listen_address": { 00:18:27.844 "trtype": "TCP", 00:18:27.844 "adrfam": "IPv4", 00:18:27.844 "traddr": "10.0.0.2", 00:18:27.844 "trsvcid": "4420" 00:18:27.844 }, 00:18:27.844 "peer_address": { 00:18:27.844 "trtype": "TCP", 00:18:27.844 "adrfam": "IPv4", 00:18:27.844 "traddr": "10.0.0.1", 00:18:27.844 "trsvcid": "40786" 00:18:27.844 }, 00:18:27.844 "auth": { 00:18:27.844 "state": "completed", 00:18:27.844 "digest": "sha384", 00:18:27.844 "dhgroup": "ffdhe2048" 00:18:27.844 } 00:18:27.844 } 00:18:27.844 ]' 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.844 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.845 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.845 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.845 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.845 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.845 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.105 12:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.675 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.938 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.228 00:18:29.228 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.228 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.229 { 00:18:29.229 "cntlid": 61, 00:18:29.229 "qid": 0, 00:18:29.229 "state": "enabled", 00:18:29.229 "listen_address": { 00:18:29.229 "trtype": "TCP", 00:18:29.229 "adrfam": "IPv4", 00:18:29.229 "traddr": "10.0.0.2", 00:18:29.229 "trsvcid": "4420" 00:18:29.229 }, 00:18:29.229 "peer_address": { 00:18:29.229 "trtype": "TCP", 00:18:29.229 "adrfam": "IPv4", 00:18:29.229 "traddr": "10.0.0.1", 00:18:29.229 "trsvcid": "40816" 00:18:29.229 }, 00:18:29.229 "auth": { 00:18:29.229 "state": "completed", 00:18:29.229 "digest": "sha384", 00:18:29.229 "dhgroup": "ffdhe2048" 00:18:29.229 } 00:18:29.229 } 00:18:29.229 ]' 00:18:29.229 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.490 12:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.490 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:30.433 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.434 12:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.695 00:18:30.695 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.695 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.695 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.957 { 00:18:30.957 "cntlid": 63, 00:18:30.957 "qid": 0, 00:18:30.957 "state": "enabled", 00:18:30.957 "listen_address": { 00:18:30.957 "trtype": "TCP", 00:18:30.957 "adrfam": "IPv4", 00:18:30.957 "traddr": "10.0.0.2", 00:18:30.957 "trsvcid": "4420" 00:18:30.957 }, 00:18:30.957 "peer_address": { 00:18:30.957 "trtype": "TCP", 00:18:30.957 "adrfam": "IPv4", 00:18:30.957 "traddr": "10.0.0.1", 00:18:30.957 "trsvcid": "40838" 00:18:30.957 }, 00:18:30.957 "auth": { 00:18:30.957 "state": "completed", 00:18:30.957 "digest": 
"sha384", 00:18:30.957 "dhgroup": "ffdhe2048" 00:18:30.957 } 00:18:30.957 } 00:18:30.957 ]' 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.957 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.218 12:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:31.791 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.791 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:31.791 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.791 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.053 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.315 00:18:32.315 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.315 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.315 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.576 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.576 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.576 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.576 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.577 12:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.577 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.577 { 00:18:32.577 "cntlid": 65, 00:18:32.577 "qid": 0, 00:18:32.577 "state": "enabled", 00:18:32.577 "listen_address": { 00:18:32.577 "trtype": "TCP", 00:18:32.577 "adrfam": "IPv4", 00:18:32.577 "traddr": "10.0.0.2", 00:18:32.577 "trsvcid": "4420" 00:18:32.577 }, 00:18:32.577 "peer_address": { 00:18:32.577 "trtype": "TCP", 00:18:32.577 "adrfam": "IPv4", 00:18:32.577 "traddr": "10.0.0.1", 00:18:32.577 "trsvcid": "40872" 00:18:32.577 }, 00:18:32.577 "auth": { 00:18:32.577 "state": "completed", 00:18:32.577 "digest": "sha384", 00:18:32.577 "dhgroup": "ffdhe3072" 00:18:32.577 } 00:18:32.577 } 00:18:32.577 ]' 00:18:32.577 12:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.577 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.838 
12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.411 12:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.673 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.935 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.935 { 00:18:33.935 "cntlid": 67, 00:18:33.935 "qid": 0, 00:18:33.935 "state": "enabled", 00:18:33.935 "listen_address": { 00:18:33.935 "trtype": "TCP", 00:18:33.935 "adrfam": "IPv4", 00:18:33.935 "traddr": "10.0.0.2", 00:18:33.935 "trsvcid": "4420" 00:18:33.935 }, 00:18:33.935 "peer_address": { 00:18:33.935 "trtype": "TCP", 00:18:33.935 "adrfam": "IPv4", 00:18:33.935 "traddr": "10.0.0.1", 00:18:33.935 "trsvcid": "40908" 00:18:33.935 }, 00:18:33.935 "auth": { 00:18:33.935 "state": "completed", 00:18:33.935 "digest": "sha384", 00:18:33.935 "dhgroup": "ffdhe3072" 00:18:33.935 } 00:18:33.935 } 00:18:33.935 ]' 00:18:33.935 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.197 12:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.142 
12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.142 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.404 00:18:35.404 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.404 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.404 12:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.666 { 00:18:35.666 "cntlid": 69, 00:18:35.666 "qid": 0, 00:18:35.666 "state": "enabled", 00:18:35.666 "listen_address": { 
00:18:35.666 "trtype": "TCP", 00:18:35.666 "adrfam": "IPv4", 00:18:35.666 "traddr": "10.0.0.2", 00:18:35.666 "trsvcid": "4420" 00:18:35.666 }, 00:18:35.666 "peer_address": { 00:18:35.666 "trtype": "TCP", 00:18:35.666 "adrfam": "IPv4", 00:18:35.666 "traddr": "10.0.0.1", 00:18:35.666 "trsvcid": "40924" 00:18:35.666 }, 00:18:35.666 "auth": { 00:18:35.666 "state": "completed", 00:18:35.666 "digest": "sha384", 00:18:35.666 "dhgroup": "ffdhe3072" 00:18:35.666 } 00:18:35.666 } 00:18:35.666 ]' 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.666 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.928 12:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.501 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.762 
12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.762 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.023 00:18:37.023 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.023 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.023 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.284 { 00:18:37.284 "cntlid": 71, 00:18:37.284 "qid": 0, 00:18:37.284 "state": "enabled", 00:18:37.284 "listen_address": { 00:18:37.284 "trtype": "TCP", 00:18:37.284 "adrfam": "IPv4", 00:18:37.284 "traddr": "10.0.0.2", 00:18:37.284 "trsvcid": "4420" 00:18:37.284 }, 00:18:37.284 "peer_address": { 00:18:37.284 "trtype": "TCP", 00:18:37.284 "adrfam": "IPv4", 00:18:37.284 "traddr": "10.0.0.1", 00:18:37.284 "trsvcid": "56968" 00:18:37.284 }, 00:18:37.284 "auth": { 00:18:37.284 "state": "completed", 00:18:37.284 "digest": "sha384", 00:18:37.284 "dhgroup": "ffdhe3072" 00:18:37.284 } 00:18:37.284 } 00:18:37.284 ]' 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target 
00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.284 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.545 12:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=:
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:38.118 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:38.379 12:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:38.641
00:18:38.641 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:38.641 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:38.641 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:38.902 {
00:18:38.902 "cntlid": 73,
00:18:38.902 "qid": 0,
00:18:38.902 "state": "enabled",
00:18:38.902 "listen_address": {
00:18:38.902 "trtype": "TCP",
00:18:38.902 "adrfam": "IPv4",
00:18:38.902 "traddr": "10.0.0.2",
00:18:38.902 "trsvcid": "4420"
00:18:38.902 },
00:18:38.902 "peer_address": {
00:18:38.902 "trtype": "TCP",
00:18:38.902 "adrfam": "IPv4",
00:18:38.902 "traddr": "10.0.0.1",
00:18:38.902 "trsvcid": "56988"
00:18:38.902 },
00:18:38.902 "auth": {
00:18:38.902 "state": "completed",
00:18:38.902 "digest": "sha384",
00:18:38.902 "dhgroup": "ffdhe4096"
00:18:38.902 }
00:18:38.902 }
00:18:38.902 ]'
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:38.902 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.164 12:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=:
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:39.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
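Each round closes by exercising the same keys through the kernel initiator: nvme connect is handed the host and controller secrets in the DHHC-1 wire format, DHHC-1:NN:<base64>:, where NN encodes how the secret was transformed (00 for an untransformed secret; 01/02/03 correspond to SHA-256/384/512 in nvme-cli's convention). A hedged sketch with placeholder secrets ($hostnqn and $hostid stand in for the literal values in the log):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<base64 host key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<base64 controller key>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0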
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:39.736 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:39.997 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:18:39.997 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:39.997 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:39.998 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:40.259
00:18:40.259 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:40.259 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.259 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:40.521 {
00:18:40.521 "cntlid": 75,
00:18:40.521 "qid": 0,
00:18:40.521 "state": "enabled",
00:18:40.521 "listen_address": {
00:18:40.521 "trtype": "TCP",
00:18:40.521 "adrfam": "IPv4",
00:18:40.521 "traddr": "10.0.0.2",
00:18:40.521 "trsvcid": "4420"
00:18:40.521 },
00:18:40.521 "peer_address": {
00:18:40.521 "trtype": "TCP",
00:18:40.521 "adrfam": "IPv4",
00:18:40.521 "traddr": "10.0.0.1",
00:18:40.521 "trsvcid": "57012"
00:18:40.521 },
00:18:40.521 "auth": {
00:18:40.521 "state": "completed",
00:18:40.521 "digest": "sha384",
00:18:40.521 "dhgroup": "ffdhe4096"
00:18:40.521 }
00:18:40.521 }
00:18:40.521 ]'
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:40.521 12:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:40.521 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:40.521 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.521 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:40.782 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==:
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:41.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:41.355 12:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
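The @96 record above starts another connect_authenticate round; the @34-@49 records that follow trace its body. A hedged reconstruction of the helper's shape (names follow the xtrace; $subnqn and $hostnqn stand in for the literal NQNs, and the real helper in target/auth.sh may differ in detail):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 key=key$3
        local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        # Provision the key on the target, then attach from the host side.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" "${ckey[@]}"
        # Verify the controller came up and authentication completed, then tear down.
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect completed
        hostrpc bdev_nvme_detach_controller nvme0
    }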
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:41.616 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:41.878
00:18:41.878 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:41.878 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:41.878 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:42.140 {
00:18:42.140 "cntlid": 77,
00:18:42.140 "qid": 0,
00:18:42.140 "state": "enabled",
00:18:42.140 "listen_address": {
00:18:42.140 "trtype": "TCP",
00:18:42.140 "adrfam": "IPv4",
00:18:42.140 "traddr": "10.0.0.2",
00:18:42.140 "trsvcid": "4420"
00:18:42.140 },
00:18:42.140 "peer_address": {
00:18:42.140 "trtype": "TCP",
00:18:42.140 "adrfam": "IPv4",
00:18:42.140 "traddr": "10.0.0.1",
00:18:42.140 "trsvcid": "57038"
00:18:42.140 },
00:18:42.140 "auth": {
00:18:42.140 "state": "completed",
00:18:42.140 "digest": "sha384",
00:18:42.140 "dhgroup": "ffdhe4096"
00:18:42.140 }
00:18:42.140 }
00:18:42.140 ]'
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:42.140 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:42.402 12:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf:
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:42.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:42.974 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:43.235 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:43.496
00:18:43.496 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:43.496 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:43.496 12:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:43.756 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:43.756 {
00:18:43.756 "cntlid": 79,
00:18:43.757 "qid": 0,
00:18:43.757 "state": "enabled",
00:18:43.757 "listen_address": {
00:18:43.757 "trtype": "TCP",
00:18:43.757 "adrfam": "IPv4",
00:18:43.757 "traddr": "10.0.0.2",
00:18:43.757 "trsvcid": "4420"
00:18:43.757 },
00:18:43.757 "peer_address": {
00:18:43.757 "trtype": "TCP",
00:18:43.757 "adrfam": "IPv4",
00:18:43.757 "traddr": "10.0.0.1",
00:18:43.757 "trsvcid": "57064"
00:18:43.757 },
00:18:43.757 "auth": {
00:18:43.757 "state": "completed",
00:18:43.757 "digest": "sha384",
00:18:43.757 "dhgroup": "ffdhe4096"
00:18:43.757 }
00:18:43.757 }
00:18:43.757 ]'
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:43.757 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:44.017 12:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=:
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:44.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:44.586 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:44.845 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:45.104
00:18:45.104 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:45.104 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:45.104 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:45.365 {
00:18:45.365 "cntlid": 81,
00:18:45.365 "qid": 0,
00:18:45.365 "state": "enabled",
00:18:45.365 "listen_address": {
00:18:45.365 "trtype": "TCP",
00:18:45.365 "adrfam": "IPv4",
00:18:45.365 "traddr": "10.0.0.2",
00:18:45.365 "trsvcid": "4420"
00:18:45.365 },
00:18:45.365 "peer_address": {
00:18:45.365 "trtype": "TCP",
00:18:45.365 "adrfam": "IPv4",
00:18:45.365 "traddr": "10.0.0.1",
00:18:45.365 "trsvcid": "57078"
00:18:45.365 },
00:18:45.365 "auth": {
00:18:45.365 "state": "completed",
00:18:45.365 "digest": "sha384",
00:18:45.365 "dhgroup": "ffdhe6144"
00:18:45.365 }
00:18:45.365 }
00:18:45.365 ]'
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:45.365 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:45.625 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:45.625 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:45.625 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:45.625 12:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.625 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=:
00:18:46.236 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:46.496 12:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.496 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.761
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:47.021 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:47.021 {
00:18:47.022 "cntlid": 83,
00:18:47.022 "qid": 0,
00:18:47.022 "state": "enabled",
00:18:47.022 "listen_address": {
00:18:47.022 "trtype": "TCP",
00:18:47.022 "adrfam": "IPv4",
00:18:47.022 "traddr": "10.0.0.2",
00:18:47.022 "trsvcid": "4420"
00:18:47.022 },
00:18:47.022 "peer_address": {
00:18:47.022 "trtype": "TCP",
00:18:47.022 "adrfam": "IPv4",
00:18:47.022 "traddr": "10.0.0.1",
00:18:47.022 "trsvcid": "57928"
00:18:47.022 },
00:18:47.022 "auth": {
00:18:47.022 "state": "completed",
00:18:47.022 "digest": "sha384",
"dhgroup": "ffdhe6144" 00:18:47.022 } 00:18:47.022 } 00:18:47.022 ]' 00:18:47.022 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.022 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.022 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.283 12:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- 
00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.226 12:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.487
00:18:48.487 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:48.487 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:48.487 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:48.749 {
00:18:48.749 "cntlid": 85,
00:18:48.749 "qid": 0,
00:18:48.749 "state": "enabled",
00:18:48.749 "listen_address": {
00:18:48.749 "trtype": "TCP",
00:18:48.749 "adrfam": "IPv4",
00:18:48.749 "traddr": "10.0.0.2",
00:18:48.749 "trsvcid": "4420"
00:18:48.749 },
00:18:48.749 "peer_address": {
00:18:48.749 "trtype": "TCP",
00:18:48.749 "adrfam": "IPv4",
00:18:48.749 "traddr": "10.0.0.1",
00:18:48.749 "trsvcid": "57960"
00:18:48.749 },
00:18:48.749 "auth": {
00:18:48.749 "state": "completed",
00:18:48.749 "digest": "sha384",
00:18:48.749 "dhgroup": "ffdhe6144"
00:18:48.749 }
00:18:48.749 }
00:18:48.749 ]'
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:48.749 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:49.010 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:49.010 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:49.010 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.010 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.010 12:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf:
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:49.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:49.952 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:50.214
00:18:50.214 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:50.214 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:50.214 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:50.475 {
00:18:50.475 "cntlid": 87,
00:18:50.475 "qid": 0,
00:18:50.475 "state": "enabled",
00:18:50.475 "listen_address": {
00:18:50.475 "trtype": "TCP",
00:18:50.475 "adrfam": "IPv4",
00:18:50.475 "traddr": "10.0.0.2",
00:18:50.475 "trsvcid": "4420"
00:18:50.475 },
00:18:50.475 "peer_address": {
00:18:50.475 "trtype": "TCP",
00:18:50.475 "adrfam": "IPv4",
00:18:50.475 "traddr": "10.0.0.1",
00:18:50.475 "trsvcid": "57992"
00:18:50.475 },
00:18:50.475 "auth": {
00:18:50.475 "state": "completed",
00:18:50.475 "digest": "sha384",
00:18:50.475 "dhgroup": "ffdhe6144"
00:18:50.475 }
00:18:50.475 }
00:18:50.475 ]'
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:50.475 12:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:50.475 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:50.475 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:50.475 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:50.475 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:50.737 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:50.737 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=:
00:18:51.677 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:51.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
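The @92/@93 records mark the outer iteration of this test: every FFDHE group in the run's list (the RFC 7919 groups; this pass has walked ffdhe3072, ffdhe4096, ffdhe6144 and is now entering ffdhe8192) is exercised against every configured key index, reprogramming the host with bdev_nvme_set_options each time. The traced structure, sketched under those assumptions:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to exactly one digest/dhgroup combination...
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # ...then prove authentication still completes with key $keyid.
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done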
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:51.678 12:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:51.678 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:52.250
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:52.250 {
00:18:52.250 "cntlid": 89,
00:18:52.250 "qid": 0,
00:18:52.250 "state": "enabled",
00:18:52.250 "listen_address": {
00:18:52.250 "trtype": "TCP",
00:18:52.250 "adrfam": "IPv4",
00:18:52.250 "traddr": "10.0.0.2",
00:18:52.250 "trsvcid": "4420" 00:18:52.250 }, 00:18:52.250 "peer_address": { 00:18:52.250 "trtype": "TCP", 00:18:52.250 "adrfam": "IPv4", 00:18:52.250 "traddr": "10.0.0.1", 00:18:52.250 "trsvcid": "58020" 00:18:52.250 }, 00:18:52.250 "auth": { 00:18:52.250 "state": "completed", 00:18:52.250 "digest": "sha384", 00:18:52.250 "dhgroup": "ffdhe8192" 00:18:52.250 } 00:18:52.250 } 00:18:52.250 ]' 00:18:52.250 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.511 12:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.772 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.344 12:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:53.606 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.177
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:54.177 {
00:18:54.177 "cntlid": 91,
00:18:54.177 "qid": 0,
00:18:54.177 "state": "enabled",
00:18:54.177 "listen_address": {
00:18:54.177 "trtype": "TCP",
00:18:54.177 "adrfam": "IPv4",
00:18:54.177 "traddr": "10.0.0.2",
00:18:54.177 "trsvcid": "4420"
00:18:54.177 },
00:18:54.177 "peer_address": {
00:18:54.177 "trtype": "TCP",
00:18:54.177 "adrfam": "IPv4",
00:18:54.177 "traddr": "10.0.0.1",
00:18:54.177 "trsvcid": "58040"
00:18:54.177 },
00:18:54.177 "auth": {
00:18:54.177 "state": "completed",
00:18:54.177 "digest": "sha384",
00:18:54.177 "dhgroup": "ffdhe8192"
00:18:54.177 }
00:18:54.177 }
00:18:54.177 ]'
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:54.177 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:54.439 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:54.439 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:54.439 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.439 12:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.439 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.382 12:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.953 00:18:55.953 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.953 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.953 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.214 { 00:18:56.214 "cntlid": 93, 00:18:56.214 "qid": 0, 00:18:56.214 "state": "enabled", 00:18:56.214 "listen_address": { 00:18:56.214 "trtype": "TCP", 00:18:56.214 "adrfam": "IPv4", 00:18:56.214 "traddr": "10.0.0.2", 00:18:56.214 "trsvcid": "4420" 00:18:56.214 }, 00:18:56.214 "peer_address": { 00:18:56.214 "trtype": "TCP", 00:18:56.214 "adrfam": "IPv4", 00:18:56.214 "traddr": "10.0.0.1", 00:18:56.214 "trsvcid": "58060" 00:18:56.214 }, 00:18:56.214 "auth": { 00:18:56.214 "state": "completed", 00:18:56.214 "digest": "sha384", 00:18:56.214 "dhgroup": "ffdhe8192" 00:18:56.214 } 00:18:56.214 } 00:18:56.214 ]' 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.214 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.474 12:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.043 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.302 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.303 12:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.876 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.876 12:23:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.876 { 00:18:57.876 "cntlid": 95, 00:18:57.876 "qid": 0, 00:18:57.876 "state": "enabled", 00:18:57.876 "listen_address": { 00:18:57.876 "trtype": "TCP", 00:18:57.876 "adrfam": "IPv4", 00:18:57.876 "traddr": "10.0.0.2", 00:18:57.876 "trsvcid": "4420" 00:18:57.876 }, 00:18:57.876 "peer_address": { 00:18:57.876 "trtype": "TCP", 00:18:57.876 "adrfam": "IPv4", 00:18:57.876 "traddr": "10.0.0.1", 00:18:57.876 "trsvcid": "56986" 00:18:57.876 }, 00:18:57.876 "auth": { 00:18:57.876 "state": "completed", 00:18:57.876 "digest": "sha384", 00:18:57.876 "dhgroup": "ffdhe8192" 00:18:57.876 } 00:18:57.876 } 00:18:57.876 ]' 00:18:57.876 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.138 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.398 12:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.969 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.230 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:59.230 12:23:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.230 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.230 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.231 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.231 00:18:59.492 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.492 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.492 12:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.492 { 00:18:59.492 "cntlid": 97, 00:18:59.492 "qid": 0, 00:18:59.492 "state": "enabled", 00:18:59.492 "listen_address": { 00:18:59.492 "trtype": "TCP", 00:18:59.492 "adrfam": "IPv4", 00:18:59.492 "traddr": "10.0.0.2", 00:18:59.492 "trsvcid": "4420" 00:18:59.492 }, 00:18:59.492 "peer_address": { 00:18:59.492 "trtype": "TCP", 00:18:59.492 "adrfam": "IPv4", 00:18:59.492 "traddr": "10.0.0.1", 00:18:59.492 "trsvcid": "57002" 00:18:59.492 }, 00:18:59.492 "auth": { 00:18:59.492 "state": "completed", 00:18:59.492 "digest": "sha512", 00:18:59.492 "dhgroup": "null" 00:18:59.492 } 00:18:59.492 } 00:18:59.492 ]' 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.492 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.753 12:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.738 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.739 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.739 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.739 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.998 00:19:00.998 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.998 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.998 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.998 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.259 { 00:19:01.259 "cntlid": 99, 00:19:01.259 "qid": 0, 00:19:01.259 "state": "enabled", 00:19:01.259 "listen_address": { 00:19:01.259 "trtype": "TCP", 00:19:01.259 "adrfam": "IPv4", 00:19:01.259 "traddr": "10.0.0.2", 00:19:01.259 "trsvcid": "4420" 00:19:01.259 }, 00:19:01.259 "peer_address": { 00:19:01.259 "trtype": "TCP", 00:19:01.259 "adrfam": "IPv4", 00:19:01.259 "traddr": "10.0.0.1", 00:19:01.259 "trsvcid": "57026" 00:19:01.259 }, 00:19:01.259 "auth": { 00:19:01.259 "state": "completed", 00:19:01.259 "digest": "sha512", 00:19:01.259 "dhgroup": "null" 00:19:01.259 } 00:19:01.259 } 00:19:01.259 ]' 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.259 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.260 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.520 12:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 
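The records above are one pass of the script's connect_authenticate loop, repeated for every digest/DH-group/key-index combination: the host bdev layer is pinned to a single DH-HMAC-CHAP digest and DH group, the host NQN is re-registered on the subsystem with the key pair under test, and a controller attach is driven through the host RPC socket to force the handshake. A condensed sketch of one iteration follows; it assumes the key names (key0..key3 and their ckeyN controller counterparts) were registered earlier in the run, shortens the rpc.py path from the full workspace path in the trace, and uses $hostid as a stand-in for the literal UUID above.

    # One connect_authenticate iteration (illustrative sketch, not the script verbatim).
    rpc=scripts/rpc.py                                # target-side RPC, default socket
    hostrpc="scripts/rpc.py -s /var/tmp/host.sock"    # host-side RPC socket
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" # $hostid is a placeholder
    digest=sha512
    dhgroup=null
    keyid=1

    # Pin the host to exactly one digest and DH group so the negotiation is deterministic.
    $hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the subsystem with the key pair under test (target side).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller from the host side; this is what performs the handshake.
    $hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

For iterations without a controller key (the key3 passes in this trace), the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above drops the --dhchap-ctrlr-key arguments entirely, so add_host and attach run with --dhchap-key key3 alone.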
00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.090 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.351 12:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.611 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.611 { 00:19:02.611 "cntlid": 101, 00:19:02.611 "qid": 0, 00:19:02.611 "state": "enabled", 00:19:02.611 "listen_address": { 00:19:02.611 "trtype": "TCP", 00:19:02.611 "adrfam": "IPv4", 00:19:02.611 "traddr": "10.0.0.2", 00:19:02.611 "trsvcid": "4420" 00:19:02.611 }, 00:19:02.611 "peer_address": { 00:19:02.611 "trtype": "TCP", 00:19:02.611 "adrfam": "IPv4", 00:19:02.611 "traddr": "10.0.0.1", 00:19:02.611 "trsvcid": "57054" 00:19:02.611 }, 00:19:02.611 "auth": { 00:19:02.611 "state": "completed", 00:19:02.611 "digest": "sha512", 00:19:02.611 "dhgroup": "null" 00:19:02.611 } 00:19:02.611 } 00:19:02.611 ]' 00:19:02.611 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.871 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.131 12:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.701 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.961 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.222 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.222 { 00:19:04.222 "cntlid": 103, 00:19:04.222 "qid": 0, 00:19:04.222 "state": "enabled", 00:19:04.222 "listen_address": { 00:19:04.222 "trtype": "TCP", 00:19:04.222 "adrfam": "IPv4", 00:19:04.222 "traddr": "10.0.0.2", 00:19:04.222 "trsvcid": "4420" 00:19:04.222 }, 00:19:04.222 "peer_address": { 00:19:04.222 "trtype": "TCP", 00:19:04.222 "adrfam": "IPv4", 00:19:04.222 "traddr": "10.0.0.1", 00:19:04.222 "trsvcid": "57082" 00:19:04.222 }, 00:19:04.222 "auth": { 00:19:04.222 "state": "completed", 00:19:04.222 "digest": "sha512", 00:19:04.222 "dhgroup": "null" 00:19:04.222 } 00:19:04.222 } 00:19:04.222 ]' 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.222 12:23:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.222 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.482 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.482 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.482 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.482 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.482 12:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.482 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.423 12:23:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.423 12:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.684 00:19:05.684 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.684 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.684 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.944 { 00:19:05.944 "cntlid": 105, 00:19:05.944 "qid": 0, 00:19:05.944 "state": "enabled", 00:19:05.944 "listen_address": { 00:19:05.944 "trtype": "TCP", 00:19:05.944 "adrfam": "IPv4", 00:19:05.944 "traddr": "10.0.0.2", 00:19:05.944 "trsvcid": "4420" 00:19:05.944 }, 00:19:05.944 "peer_address": { 00:19:05.944 "trtype": "TCP", 00:19:05.944 "adrfam": "IPv4", 00:19:05.944 "traddr": "10.0.0.1", 00:19:05.944 "trsvcid": "57116" 00:19:05.944 }, 00:19:05.944 "auth": { 00:19:05.944 "state": "completed", 00:19:05.944 "digest": "sha512", 00:19:05.944 "dhgroup": "ffdhe2048" 00:19:05.944 } 00:19:05.944 } 00:19:05.944 ]' 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.944 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.205 12:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.778 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.039 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.301 00:19:07.301 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.301 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.301 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.562 { 00:19:07.562 "cntlid": 107, 00:19:07.562 "qid": 0, 00:19:07.562 "state": "enabled", 00:19:07.562 "listen_address": { 00:19:07.562 "trtype": "TCP", 00:19:07.562 "adrfam": "IPv4", 00:19:07.562 "traddr": "10.0.0.2", 00:19:07.562 "trsvcid": "4420" 00:19:07.562 }, 00:19:07.562 "peer_address": { 00:19:07.562 "trtype": "TCP", 00:19:07.562 "adrfam": "IPv4", 00:19:07.562 "traddr": "10.0.0.1", 00:19:07.562 "trsvcid": "60910" 00:19:07.562 }, 00:19:07.562 "auth": { 00:19:07.562 "state": "completed", 00:19:07.562 "digest": "sha512", 00:19:07.562 "dhgroup": "ffdhe2048" 00:19:07.562 } 00:19:07.562 } 00:19:07.562 ]' 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.562 12:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.562 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.562 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.562 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.562 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.562 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.823 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.394 12:23:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:08.394 12:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.655 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.656 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.656 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.656 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.656 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.916 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.916 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.917 12:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.917 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.917 { 00:19:08.917 "cntlid": 109, 00:19:08.917 "qid": 0, 00:19:08.917 "state": "enabled", 00:19:08.917 "listen_address": { 00:19:08.917 "trtype": "TCP", 00:19:08.917 "adrfam": "IPv4", 00:19:08.917 "traddr": "10.0.0.2", 00:19:08.917 "trsvcid": "4420" 00:19:08.917 }, 00:19:08.917 "peer_address": { 00:19:08.917 "trtype": "TCP", 00:19:08.917 
"adrfam": "IPv4", 00:19:08.917 "traddr": "10.0.0.1", 00:19:08.917 "trsvcid": "60942" 00:19:08.917 }, 00:19:08.917 "auth": { 00:19:08.917 "state": "completed", 00:19:08.917 "digest": "sha512", 00:19:08.917 "dhgroup": "ffdhe2048" 00:19:08.917 } 00:19:08.917 } 00:19:08.917 ]' 00:19:08.917 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.177 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.177 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.177 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.178 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.178 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.178 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.178 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.438 12:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:10.007 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.008 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.267 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:10.267 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.267 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.268 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.528 00:19:10.528 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.528 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.528 12:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.528 { 00:19:10.528 "cntlid": 111, 00:19:10.528 "qid": 0, 00:19:10.528 "state": "enabled", 00:19:10.528 "listen_address": { 00:19:10.528 "trtype": "TCP", 00:19:10.528 "adrfam": "IPv4", 00:19:10.528 "traddr": "10.0.0.2", 00:19:10.528 "trsvcid": "4420" 00:19:10.528 }, 00:19:10.528 "peer_address": { 00:19:10.528 "trtype": "TCP", 00:19:10.528 "adrfam": "IPv4", 00:19:10.528 "traddr": "10.0.0.1", 00:19:10.528 "trsvcid": "60970" 00:19:10.528 }, 00:19:10.528 "auth": { 00:19:10.528 "state": "completed", 00:19:10.528 "digest": "sha512", 00:19:10.528 "dhgroup": "ffdhe2048" 00:19:10.528 } 00:19:10.528 } 00:19:10.528 ]' 00:19:10.528 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.788 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.048 12:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.622 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.883 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
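For orientation while reading the trace: each connect_authenticate iteration above boils down to three RPC calls — pin the host initiator's DH-HMAC-CHAP parameters, register the host NQN on the target subsystem with its keys, then attach a controller so the fabric connect performs the authentication. A minimal sketch distilled from the log follows; the socket paths, NQNs, and key names (key0/ckey0, assumed to have been loaded into the keyring earlier in the run) mirror the trace and are not portable as-is.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

# 1. Pin the host-side initiator to one digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the target subsystem, binding its DH-HMAC-CHAP keys
#    (the target app is assumed to listen on the default RPC socket).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller; authentication happens during the connect.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0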
00:19:12.143 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.143 { 00:19:12.143 "cntlid": 113, 00:19:12.143 "qid": 0, 00:19:12.143 "state": "enabled", 00:19:12.143 "listen_address": { 00:19:12.143 "trtype": "TCP", 00:19:12.143 "adrfam": "IPv4", 00:19:12.143 "traddr": "10.0.0.2", 00:19:12.143 "trsvcid": "4420" 00:19:12.143 }, 00:19:12.143 "peer_address": { 00:19:12.143 "trtype": "TCP", 00:19:12.143 "adrfam": "IPv4", 00:19:12.143 "traddr": "10.0.0.1", 00:19:12.143 "trsvcid": "60994" 00:19:12.143 }, 00:19:12.143 "auth": { 00:19:12.143 "state": "completed", 00:19:12.143 "digest": "sha512", 00:19:12.143 "dhgroup": "ffdhe3072" 00:19:12.143 } 00:19:12.143 } 00:19:12.143 ]' 00:19:12.143 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.404 12:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.404 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
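The pass/fail logic behind the jq lines above is a plain assertion over nvmf_subsystem_get_qpairs output: the target reports the negotiated digest, dhgroup, and authentication state per qpair, and the test requires each field to equal the value it just configured. A sketch of that check (same RPC client path as the trace; the expected values change with each digest/dhgroup iteration):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Fetch the subsystem's qpairs and assert the negotiated auth fields.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]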
00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.344 12:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.604 00:19:13.604 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.604 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.604 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.865 { 00:19:13.865 
"cntlid": 115, 00:19:13.865 "qid": 0, 00:19:13.865 "state": "enabled", 00:19:13.865 "listen_address": { 00:19:13.865 "trtype": "TCP", 00:19:13.865 "adrfam": "IPv4", 00:19:13.865 "traddr": "10.0.0.2", 00:19:13.865 "trsvcid": "4420" 00:19:13.865 }, 00:19:13.865 "peer_address": { 00:19:13.865 "trtype": "TCP", 00:19:13.865 "adrfam": "IPv4", 00:19:13.865 "traddr": "10.0.0.1", 00:19:13.865 "trsvcid": "32782" 00:19:13.865 }, 00:19:13.865 "auth": { 00:19:13.865 "state": "completed", 00:19:13.865 "digest": "sha512", 00:19:13.865 "dhgroup": "ffdhe3072" 00:19:13.865 } 00:19:13.865 } 00:19:13.865 ]' 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.865 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.125 12:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.065 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.066 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.066 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.354 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.354 { 00:19:15.354 "cntlid": 117, 00:19:15.354 "qid": 0, 00:19:15.354 "state": "enabled", 00:19:15.354 "listen_address": { 00:19:15.354 "trtype": "TCP", 00:19:15.354 "adrfam": "IPv4", 00:19:15.354 "traddr": "10.0.0.2", 00:19:15.354 "trsvcid": "4420" 00:19:15.354 }, 00:19:15.354 "peer_address": { 00:19:15.354 "trtype": "TCP", 00:19:15.354 "adrfam": "IPv4", 00:19:15.354 "traddr": "10.0.0.1", 00:19:15.354 "trsvcid": "32808" 00:19:15.354 }, 00:19:15.354 "auth": { 00:19:15.354 "state": "completed", 00:19:15.354 "digest": "sha512", 00:19:15.354 "dhgroup": "ffdhe3072" 00:19:15.354 } 00:19:15.354 } 00:19:15.354 ]' 00:19:15.354 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.614 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.614 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.614 12:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.614 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:19:15.614 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.614 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.614 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.614 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.554 12:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.554 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.814 00:19:16.814 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.814 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.814 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.075 { 00:19:17.075 "cntlid": 119, 00:19:17.075 "qid": 0, 00:19:17.075 "state": "enabled", 00:19:17.075 "listen_address": { 00:19:17.075 "trtype": "TCP", 00:19:17.075 "adrfam": "IPv4", 00:19:17.075 "traddr": "10.0.0.2", 00:19:17.075 "trsvcid": "4420" 00:19:17.075 }, 00:19:17.075 "peer_address": { 00:19:17.075 "trtype": "TCP", 00:19:17.075 "adrfam": "IPv4", 00:19:17.075 "traddr": "10.0.0.1", 00:19:17.075 "trsvcid": "49414" 00:19:17.075 }, 00:19:17.075 "auth": { 00:19:17.075 "state": "completed", 00:19:17.075 "digest": "sha512", 00:19:17.075 "dhgroup": "ffdhe3072" 00:19:17.075 } 00:19:17.075 } 00:19:17.075 ]' 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.075 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.336 12:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.282 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.544 00:19:18.544 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.544 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.544 12:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.544 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.805 12:23:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.805 { 00:19:18.805 "cntlid": 121, 00:19:18.805 "qid": 0, 00:19:18.805 "state": "enabled", 00:19:18.805 "listen_address": { 00:19:18.805 "trtype": "TCP", 00:19:18.805 "adrfam": "IPv4", 00:19:18.805 "traddr": "10.0.0.2", 00:19:18.805 "trsvcid": "4420" 00:19:18.805 }, 00:19:18.805 "peer_address": { 00:19:18.805 "trtype": "TCP", 00:19:18.805 "adrfam": "IPv4", 00:19:18.805 "traddr": "10.0.0.1", 00:19:18.805 "trsvcid": "49450" 00:19:18.805 }, 00:19:18.805 "auth": { 00:19:18.805 "state": "completed", 00:19:18.805 "digest": "sha512", 00:19:18.805 "dhgroup": "ffdhe4096" 00:19:18.805 } 00:19:18.805 } 00:19:18.805 ]' 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.805 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.066 12:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.637 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.898 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:19.898 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.898 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.898 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.898 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.899 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.158 00:19:20.158 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.159 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.159 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.419 { 00:19:20.419 "cntlid": 123, 00:19:20.419 "qid": 0, 00:19:20.419 "state": "enabled", 00:19:20.419 "listen_address": { 00:19:20.419 "trtype": "TCP", 00:19:20.419 "adrfam": "IPv4", 00:19:20.419 "traddr": "10.0.0.2", 00:19:20.419 "trsvcid": "4420" 00:19:20.419 }, 00:19:20.419 "peer_address": { 00:19:20.419 "trtype": "TCP", 00:19:20.419 "adrfam": "IPv4", 00:19:20.419 "traddr": "10.0.0.1", 00:19:20.419 "trsvcid": "49470" 00:19:20.419 }, 00:19:20.419 "auth": { 00:19:20.419 "state": "completed", 00:19:20.419 "digest": "sha512", 00:19:20.419 "dhgroup": "ffdhe4096" 00:19:20.419 } 00:19:20.419 } 00:19:20.419 ]' 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.419 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.420 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.420 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.420 12:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.680 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.251 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.511 
12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.511 12:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.771 00:19:21.771 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.771 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.771 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.032 { 00:19:22.032 "cntlid": 125, 00:19:22.032 "qid": 0, 00:19:22.032 "state": "enabled", 00:19:22.032 "listen_address": { 00:19:22.032 "trtype": "TCP", 00:19:22.032 "adrfam": "IPv4", 00:19:22.032 "traddr": "10.0.0.2", 00:19:22.032 "trsvcid": "4420" 00:19:22.032 }, 00:19:22.032 "peer_address": { 00:19:22.032 "trtype": "TCP", 00:19:22.032 "adrfam": "IPv4", 00:19:22.032 "traddr": "10.0.0.1", 00:19:22.032 "trsvcid": "49504" 00:19:22.032 }, 00:19:22.032 "auth": { 00:19:22.032 "state": "completed", 00:19:22.032 "digest": "sha512", 00:19:22.032 "dhgroup": "ffdhe4096" 00:19:22.032 } 00:19:22.032 } 00:19:22.032 ]' 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.032 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.292 12:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.862 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.121 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.381 00:19:23.381 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.381 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.381 12:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.642 { 00:19:23.642 "cntlid": 127, 00:19:23.642 "qid": 0, 00:19:23.642 "state": "enabled", 00:19:23.642 "listen_address": { 00:19:23.642 "trtype": "TCP", 00:19:23.642 "adrfam": "IPv4", 00:19:23.642 "traddr": "10.0.0.2", 00:19:23.642 "trsvcid": "4420" 00:19:23.642 }, 00:19:23.642 "peer_address": { 00:19:23.642 "trtype": "TCP", 00:19:23.642 "adrfam": "IPv4", 00:19:23.642 "traddr": "10.0.0.1", 00:19:23.642 "trsvcid": "49530" 00:19:23.642 }, 00:19:23.642 "auth": { 00:19:23.642 "state": "completed", 00:19:23.642 "digest": "sha512", 00:19:23.642 "dhgroup": "ffdhe4096" 00:19:23.642 } 00:19:23.642 } 00:19:23.642 ]' 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.642 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.903 12:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:24.473 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.473 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.473 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.473 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.473 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
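Alongside the SPDK-initiator path, each iteration also exercises the kernel initiator: nvme-cli connects with the same DH-HMAC-CHAP secrets (the DHHC-1:xx:...: strings in the trace) and is disconnected before the next key/dhgroup combination. A sketch of that leg, with placeholder secrets rather than the ones from this run:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb

# Connect through the kernel NVMe/TCP initiator, authenticating with DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $HOSTNQN --hostid $HOSTID --dhchap-secret 'DHHC-1:00:<host-key>:' --dhchap-ctrl-secret 'DHHC-1:03:<controller-key>:'

# Tear the association down before the next iteration.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0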
00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.734 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.994 00:19:24.994 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.994 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.994 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.254 { 00:19:25.254 "cntlid": 129, 00:19:25.254 "qid": 0, 00:19:25.254 "state": "enabled", 00:19:25.254 "listen_address": { 00:19:25.254 "trtype": "TCP", 00:19:25.254 "adrfam": "IPv4", 00:19:25.254 "traddr": "10.0.0.2", 00:19:25.254 "trsvcid": "4420" 00:19:25.254 }, 00:19:25.254 "peer_address": { 00:19:25.254 "trtype": "TCP", 00:19:25.254 "adrfam": "IPv4", 00:19:25.254 "traddr": "10.0.0.1", 00:19:25.254 "trsvcid": "49566" 00:19:25.254 }, 00:19:25.254 "auth": { 
00:19:25.254 "state": "completed", 00:19:25.254 "digest": "sha512", 00:19:25.254 "dhgroup": "ffdhe6144" 00:19:25.254 } 00:19:25.254 } 00:19:25.254 ]' 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.254 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.515 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.515 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.515 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.515 12:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.515 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.456 12:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.718 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.977 { 00:19:26.977 "cntlid": 131, 00:19:26.977 "qid": 0, 00:19:26.977 "state": "enabled", 00:19:26.977 "listen_address": { 00:19:26.977 "trtype": "TCP", 00:19:26.977 "adrfam": "IPv4", 00:19:26.977 "traddr": "10.0.0.2", 00:19:26.977 "trsvcid": "4420" 00:19:26.977 }, 00:19:26.977 "peer_address": { 00:19:26.977 "trtype": "TCP", 00:19:26.977 "adrfam": "IPv4", 00:19:26.977 "traddr": "10.0.0.1", 00:19:26.977 "trsvcid": "45544" 00:19:26.977 }, 00:19:26.977 "auth": { 00:19:26.977 "state": "completed", 00:19:26.977 "digest": "sha512", 00:19:26.977 "dhgroup": "ffdhe6144" 00:19:26.977 } 00:19:26.977 } 00:19:26.977 ]' 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.977 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.237 12:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.177 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.178 12:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:28.438 00:19:28.438 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.438 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.438 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.697 { 00:19:28.697 "cntlid": 133, 00:19:28.697 "qid": 0, 00:19:28.697 "state": "enabled", 00:19:28.697 "listen_address": { 00:19:28.697 "trtype": "TCP", 00:19:28.697 "adrfam": "IPv4", 00:19:28.697 "traddr": "10.0.0.2", 00:19:28.697 "trsvcid": "4420" 00:19:28.697 }, 00:19:28.697 "peer_address": { 00:19:28.697 "trtype": "TCP", 00:19:28.697 "adrfam": "IPv4", 00:19:28.697 "traddr": "10.0.0.1", 00:19:28.697 "trsvcid": "45578" 00:19:28.697 }, 00:19:28.697 "auth": { 00:19:28.697 "state": "completed", 00:19:28.697 "digest": "sha512", 00:19:28.697 "dhgroup": "ffdhe6144" 00:19:28.697 } 00:19:28.697 } 00:19:28.697 ]' 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.697 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.957 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.957 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.957 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.957 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.957 12:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.903 12:23:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.903 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.202 00:19:30.202 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.202 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.202 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.478 { 00:19:30.478 "cntlid": 135, 00:19:30.478 "qid": 0, 00:19:30.478 "state": "enabled", 00:19:30.478 "listen_address": { 
00:19:30.478 "trtype": "TCP", 00:19:30.478 "adrfam": "IPv4", 00:19:30.478 "traddr": "10.0.0.2", 00:19:30.478 "trsvcid": "4420" 00:19:30.478 }, 00:19:30.478 "peer_address": { 00:19:30.478 "trtype": "TCP", 00:19:30.478 "adrfam": "IPv4", 00:19:30.478 "traddr": "10.0.0.1", 00:19:30.478 "trsvcid": "45604" 00:19:30.478 }, 00:19:30.478 "auth": { 00:19:30.478 "state": "completed", 00:19:30.478 "digest": "sha512", 00:19:30.478 "dhgroup": "ffdhe6144" 00:19:30.478 } 00:19:30.478 } 00:19:30.478 ]' 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.478 12:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.478 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.478 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.478 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.478 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.478 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.740 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.682 12:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.682 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.252 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.252 { 00:19:32.252 "cntlid": 137, 00:19:32.252 "qid": 0, 00:19:32.252 "state": "enabled", 00:19:32.252 "listen_address": { 00:19:32.252 "trtype": "TCP", 00:19:32.252 "adrfam": "IPv4", 00:19:32.252 "traddr": "10.0.0.2", 00:19:32.252 "trsvcid": "4420" 00:19:32.252 }, 00:19:32.252 "peer_address": { 00:19:32.252 "trtype": "TCP", 00:19:32.252 "adrfam": "IPv4", 00:19:32.252 "traddr": "10.0.0.1", 00:19:32.252 "trsvcid": "45638" 00:19:32.252 }, 00:19:32.252 "auth": { 00:19:32.252 "state": "completed", 00:19:32.252 "digest": "sha512", 00:19:32.252 "dhgroup": "ffdhe8192" 00:19:32.252 } 00:19:32.252 } 00:19:32.252 ]' 00:19:32.252 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.512 12:23:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.512 12:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.512 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.451 12:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.451 12:23:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.021 00:19:34.021 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.021 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.021 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.281 { 00:19:34.281 "cntlid": 139, 00:19:34.281 "qid": 0, 00:19:34.281 "state": "enabled", 00:19:34.281 "listen_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.2", 00:19:34.281 "trsvcid": "4420" 00:19:34.281 }, 00:19:34.281 "peer_address": { 00:19:34.281 "trtype": "TCP", 00:19:34.281 "adrfam": "IPv4", 00:19:34.281 "traddr": "10.0.0.1", 00:19:34.281 "trsvcid": "45666" 00:19:34.281 }, 00:19:34.281 "auth": { 00:19:34.281 "state": "completed", 00:19:34.281 "digest": "sha512", 00:19:34.281 "dhgroup": "ffdhe8192" 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ]' 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.281 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.541 12:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:ZTI2ZGRjZjI4ZDBlMmJmYmI1NGM1MDU1ZWM4Y2ExNDbVPu4D: --dhchap-ctrl-secret DHHC-1:02:MDRkOWRiMTdmMjg4OWRmYzRlYTI1YTI5YzU4NjZhODQ4ZmVkZDU0YTZiZTM0NTQylO/Suw==: 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.111 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.371 12:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.941 00:19:35.941 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.941 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.941 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.202 { 00:19:36.202 "cntlid": 141, 00:19:36.202 "qid": 0, 00:19:36.202 "state": "enabled", 00:19:36.202 "listen_address": { 00:19:36.202 "trtype": "TCP", 00:19:36.202 "adrfam": "IPv4", 00:19:36.202 "traddr": "10.0.0.2", 00:19:36.202 "trsvcid": "4420" 00:19:36.202 }, 00:19:36.202 "peer_address": { 00:19:36.202 "trtype": "TCP", 00:19:36.202 "adrfam": "IPv4", 00:19:36.202 "traddr": "10.0.0.1", 00:19:36.202 "trsvcid": "45692" 00:19:36.202 }, 00:19:36.202 "auth": { 00:19:36.202 "state": "completed", 00:19:36.202 "digest": "sha512", 00:19:36.202 "dhgroup": "ffdhe8192" 00:19:36.202 } 00:19:36.202 } 00:19:36.202 ]' 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.202 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.463 12:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE3YTg2NzYxYzM5OGJjMDI3YWI5OTljZGZhYzgyMWZhYTAxOGFkZWY0ZTE2MTFht2ZZRQ==: --dhchap-ctrl-secret DHHC-1:01:NzBhMmZlYjI1NmZhNmU5OTJhZTdmZmFkMWVjZjI4MDRH6Grf: 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.033 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.294 12:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.864 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.864 { 00:19:37.864 "cntlid": 143, 00:19:37.864 "qid": 0, 00:19:37.864 "state": "enabled", 00:19:37.864 "listen_address": { 00:19:37.864 "trtype": "TCP", 00:19:37.864 "adrfam": "IPv4", 00:19:37.864 "traddr": "10.0.0.2", 00:19:37.864 "trsvcid": "4420" 00:19:37.864 }, 00:19:37.864 "peer_address": { 00:19:37.864 "trtype": "TCP", 00:19:37.864 "adrfam": "IPv4", 00:19:37.864 "traddr": "10.0.0.1", 00:19:37.864 "trsvcid": "34142" 00:19:37.864 }, 00:19:37.864 "auth": { 00:19:37.864 "state": "completed", 00:19:37.864 "digest": "sha512", 00:19:37.864 "dhgroup": "ffdhe8192" 00:19:37.864 } 00:19:37.864 } 00:19:37.864 ]' 00:19:37.864 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.124 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.124 12:23:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.124 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.124 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.124 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.124 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.125 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.384 12:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.955 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 
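The sequence above is the DH-HMAC-CHAP round trip that connect_authenticate repeats for every digest/dhgroup/key combination: constrain the host's allowed parameters, authorize the host NQN on the target, then attach and verify. A minimal sketch of those three RPCs, assuming the socket paths, listener address, and key names (key0/ckey0) used in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

# Host side: restrict the initiator to a single digest/dhgroup pair so the
# handshake must negotiate exactly this combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side (default socket /var/tmp/spdk.sock): authorize the host NQN
# with key0; the controller key (ckey0) makes authentication bidirectional.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; the attach succeeds only if DH-HMAC-CHAP
# completes, which the test then checks via nvmf_subsystem_get_qpairs
# (auth.state == "completed" and the negotiated digest/dhgroup match).
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0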
00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.215 12:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.786 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.786 { 00:19:39.786 "cntlid": 145, 00:19:39.786 "qid": 0, 00:19:39.786 "state": "enabled", 00:19:39.786 "listen_address": { 00:19:39.786 "trtype": "TCP", 00:19:39.786 "adrfam": "IPv4", 00:19:39.786 "traddr": "10.0.0.2", 00:19:39.786 "trsvcid": "4420" 00:19:39.786 }, 00:19:39.786 "peer_address": { 00:19:39.786 "trtype": "TCP", 00:19:39.786 "adrfam": "IPv4", 00:19:39.786 "traddr": "10.0.0.1", 00:19:39.786 "trsvcid": "34180" 00:19:39.786 }, 00:19:39.786 "auth": { 00:19:39.786 "state": "completed", 00:19:39.786 "digest": "sha512", 00:19:39.786 "dhgroup": "ffdhe8192" 00:19:39.786 } 00:19:39.786 } 00:19:39.786 ]' 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.786 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.047 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.047 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.047 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.047 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.047 12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.047 
12:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:ODMwZGY3ZWI2NDBlNmU5MzdmMzJkMjYzZjM3YTViNzBhZTVlMGVjM2MyN2M0N2Rjhufl8Q==: --dhchap-ctrl-secret DHHC-1:03:NjhmMDBhZTE3NzkzNDcyYTU1MzZlZjZjMzA5ZmVmMGY2MDRkMWU4NzhiYTNhOGRkNTJiZGEwMGY4YWY4ZTQ4MYGEo64=: 00:19:40.987 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.987 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:40.987 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:40.988 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.248 request: 00:19:41.248 { 00:19:41.248 "name": "nvme0", 00:19:41.248 "trtype": "tcp", 00:19:41.248 "traddr": 
"10.0.0.2", 00:19:41.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:41.248 "adrfam": "ipv4", 00:19:41.248 "trsvcid": "4420", 00:19:41.248 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.248 "dhchap_key": "key2", 00:19:41.248 "method": "bdev_nvme_attach_controller", 00:19:41.248 "req_id": 1 00:19:41.248 } 00:19:41.248 Got JSON-RPC error response 00:19:41.248 response: 00:19:41.248 { 00:19:41.248 "code": -5, 00:19:41.248 "message": "Input/output error" 00:19:41.248 } 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.248 12:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.820 request: 00:19:41.820 { 00:19:41.820 "name": "nvme0", 00:19:41.820 "trtype": "tcp", 00:19:41.820 "traddr": "10.0.0.2", 00:19:41.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:41.820 "adrfam": "ipv4", 00:19:41.820 "trsvcid": "4420", 00:19:41.820 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.820 "dhchap_key": "key1", 00:19:41.820 "dhchap_ctrlr_key": "ckey2", 00:19:41.820 "method": "bdev_nvme_attach_controller", 00:19:41.820 "req_id": 1 00:19:41.820 } 00:19:41.820 Got JSON-RPC error response 00:19:41.820 response: 00:19:41.820 { 00:19:41.820 "code": -5, 00:19:41.820 "message": "Input/output error" 00:19:41.820 } 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.820 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.389 request: 00:19:42.389 { 00:19:42.389 "name": "nvme0", 00:19:42.389 "trtype": "tcp", 00:19:42.389 "traddr": "10.0.0.2", 00:19:42.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:42.389 "adrfam": "ipv4", 00:19:42.389 "trsvcid": "4420", 00:19:42.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:42.389 "dhchap_key": "key1", 00:19:42.389 "dhchap_ctrlr_key": "ckey1", 00:19:42.389 "method": "bdev_nvme_attach_controller", 00:19:42.389 "req_id": 1 00:19:42.389 } 00:19:42.389 Got JSON-RPC error response 00:19:42.389 response: 00:19:42.389 { 00:19:42.389 "code": -5, 00:19:42.389 "message": "Input/output error" 00:19:42.389 } 00:19:42.389 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 646144 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 646144 ']' 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 646144 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 646144 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 646144' 00:19:42.390 killing process with pid 646144 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 646144 00:19:42.390 12:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 646144 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:42.649 12:23:48 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=671543 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 671543 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 671543 ']' 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:42.649 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.221 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:43.221 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:43.221 12:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.221 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:43.221 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 671543 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 671543 ']' 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
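The restart traced here brings the target back up with authentication debug logging enabled. Condensed, with the same flags and network namespace the run uses (waitforlisten and nvmfpid come from the test's common helpers):

# Hold initialization until RPCs arrive (--wait-for-rpc) and enable the
# nvmf_auth debug log component (-L) so each DH-HMAC-CHAP step is traced.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
waitforlisten $nvmfpid   # blocks until /var/tmp/spdk.sock accepts RPCs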
00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:43.482 12:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.483 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:43.483 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:43.483 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:43.483 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.483 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.744 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.316 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.316 { 00:19:44.316 
"cntlid": 1, 00:19:44.316 "qid": 0, 00:19:44.316 "state": "enabled", 00:19:44.316 "listen_address": { 00:19:44.316 "trtype": "TCP", 00:19:44.316 "adrfam": "IPv4", 00:19:44.316 "traddr": "10.0.0.2", 00:19:44.316 "trsvcid": "4420" 00:19:44.316 }, 00:19:44.316 "peer_address": { 00:19:44.316 "trtype": "TCP", 00:19:44.316 "adrfam": "IPv4", 00:19:44.316 "traddr": "10.0.0.1", 00:19:44.316 "trsvcid": "34240" 00:19:44.316 }, 00:19:44.316 "auth": { 00:19:44.316 "state": "completed", 00:19:44.316 "digest": "sha512", 00:19:44.316 "dhgroup": "ffdhe8192" 00:19:44.316 } 00:19:44.316 } 00:19:44.316 ]' 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.316 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.577 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.577 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.577 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.577 12:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.577 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:YzNkZWM1Y2NiNzk1NDY1MGIxM2JjMjUwODY4ZWVkNzhjYjBhOTc4OTk2YzY0NDgwYTZkZjVlNWQxMzVhNmNhZe1boYM=: 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:45.552 12:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.552 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.813 request: 00:19:45.813 { 00:19:45.813 "name": "nvme0", 00:19:45.813 "trtype": "tcp", 00:19:45.813 "traddr": "10.0.0.2", 00:19:45.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:45.813 "adrfam": "ipv4", 00:19:45.813 "trsvcid": "4420", 00:19:45.813 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.813 "dhchap_key": "key3", 00:19:45.813 "method": "bdev_nvme_attach_controller", 00:19:45.813 "req_id": 1 00:19:45.813 } 00:19:45.813 Got JSON-RPC error response 00:19:45.813 response: 00:19:45.813 { 00:19:45.813 "code": -5, 00:19:45.813 "message": "Input/output error" 00:19:45.813 } 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.813 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.074 request: 00:19:46.074 { 00:19:46.074 "name": "nvme0", 00:19:46.074 "trtype": "tcp", 00:19:46.074 "traddr": "10.0.0.2", 00:19:46.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:46.074 "adrfam": "ipv4", 00:19:46.074 "trsvcid": "4420", 00:19:46.074 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.074 "dhchap_key": "key3", 00:19:46.074 "method": "bdev_nvme_attach_controller", 00:19:46.074 "req_id": 1 00:19:46.074 } 00:19:46.074 Got JSON-RPC error response 00:19:46.074 response: 00:19:46.074 { 00:19:46.074 "code": -5, 00:19:46.074 "message": "Input/output error" 00:19:46.074 } 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.074 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.335 request: 00:19:46.335 { 00:19:46.335 "name": "nvme0", 00:19:46.335 "trtype": "tcp", 00:19:46.335 "traddr": "10.0.0.2", 00:19:46.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:46.335 "adrfam": "ipv4", 00:19:46.335 "trsvcid": "4420", 00:19:46.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.335 "dhchap_key": "key0", 00:19:46.335 "dhchap_ctrlr_key": "key1", 00:19:46.335 "method": "bdev_nvme_attach_controller", 00:19:46.335 "req_id": 1 00:19:46.335 } 00:19:46.335 Got JSON-RPC error response 00:19:46.335 response: 00:19:46.335 { 00:19:46.335 "code": -5, 00:19:46.335 "message": "Input/output error" 00:19:46.335 } 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:46.335 12:23:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:46.335 12:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:46.596 00:19:46.596 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:46.596 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:46.596 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 646295 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 646295 ']' 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 646295 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:46.856 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 646295 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 646295' 00:19:47.116 killing process with pid 646295 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 646295 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 646295 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:47.116 
12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.116 rmmod nvme_tcp 00:19:47.116 rmmod nvme_fabrics 00:19:47.116 rmmod nvme_keyring 00:19:47.116 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 671543 ']' 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 671543 ']' 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 671543' 00:19:47.377 killing process with pid 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 671543 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.377 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.378 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.378 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.378 12:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.378 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.378 12:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.926 12:23:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.926 12:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yKs /tmp/spdk.key-sha256.sjl /tmp/spdk.key-sha384.blL /tmp/spdk.key-sha512.yJ1 /tmp/spdk.key-sha512.Hrp /tmp/spdk.key-sha384.r5z /tmp/spdk.key-sha256.JHa '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:49.926 00:19:49.926 real 2m21.065s 00:19:49.926 user 5m12.459s 00:19:49.926 sys 0m19.388s 00:19:49.926 12:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:49.926 12:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.926 ************************************ 00:19:49.926 END TEST nvmf_auth_target 
00:19:49.926 ************************************ 00:19:49.926 12:23:55 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:49.926 12:23:55 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:49.926 12:23:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:49.926 12:23:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:49.926 12:23:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.926 ************************************ 00:19:49.926 START TEST nvmf_bdevio_no_huge 00:19:49.926 ************************************ 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:49.926 * Looking for test storage... 00:19:49.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.926 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.926 
12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.927 12:23:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.072 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:58.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:58.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.073 12:24:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:58.073 Found net devices under 0000:31:00.0: cvl_0_0 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:58.073 Found net devices under 0000:31:00.1: cvl_0_1 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.073 
12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:19:58.073 00:19:58.073 --- 10.0.0.2 ping statistics --- 00:19:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.073 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:19:58.073 00:19:58.073 --- 10.0.0.1 ping statistics --- 00:19:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.073 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=677385 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 677385 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 677385 ']' 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 
-- # local max_retries=100 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:58.073 12:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.073 [2024-06-10 12:24:03.521936] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:19:58.073 [2024-06-10 12:24:03.521990] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:58.073 [2024-06-10 12:24:03.605039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.334 [2024-06-10 12:24:03.711300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.334 [2024-06-10 12:24:03.711349] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.334 [2024-06-10 12:24:03.711357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.334 [2024-06-10 12:24:03.711364] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.334 [2024-06-10 12:24:03.711370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.334 [2024-06-10 12:24:03.711549] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:58.334 [2024-06-10 12:24:03.711686] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:19:58.334 [2024-06-10 12:24:03.711844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.334 [2024-06-10 12:24:03.711844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 [2024-06-10 12:24:04.376603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.906 12:24:04 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 Malloc0 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.906 [2024-06-10 12:24:04.430127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.906 { 00:19:58.906 "params": { 00:19:58.906 "name": "Nvme$subsystem", 00:19:58.906 "trtype": "$TEST_TRANSPORT", 00:19:58.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.906 "adrfam": "ipv4", 00:19:58.906 "trsvcid": "$NVMF_PORT", 00:19:58.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.906 "hdgst": ${hdgst:-false}, 00:19:58.906 "ddgst": ${ddgst:-false} 00:19:58.906 }, 00:19:58.906 "method": "bdev_nvme_attach_controller" 00:19:58.906 } 00:19:58.906 EOF 00:19:58.906 )") 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
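(gen_nvmf_target_json, traced above, builds the bdevio controller configuration in shell: each subsystem contributes one JSON stanza via a here-document, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and friends expanded by the shell; the stanzas are joined with IFS=, and the result is run through jq . before being emitted. A stripped-down, single-controller sketch of the same idea, not the verbatim helper from nvmf/common.sh:

gen_json_sketch() {
# Defaults mirror the values visible in this run; hdgst/ddgst fall back
# to false exactly as the ${hdgst:-false} expansions in the trace do.
cat <<EOF | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

Here jq doubles as a syntax check: it exits non-zero on malformed JSON, so a bad expansion fails the test early instead of confusing bdevio. The pretty-printed result it produces is what appears next in the trace.)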
00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:58.906 12:24:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:58.906 "params": { 00:19:58.906 "name": "Nvme1", 00:19:58.906 "trtype": "tcp", 00:19:58.906 "traddr": "10.0.0.2", 00:19:58.906 "adrfam": "ipv4", 00:19:58.906 "trsvcid": "4420", 00:19:58.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.907 "hdgst": false, 00:19:58.907 "ddgst": false 00:19:58.907 }, 00:19:58.907 "method": "bdev_nvme_attach_controller" 00:19:58.907 }' 00:19:58.907 [2024-06-10 12:24:04.486344] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:19:58.907 [2024-06-10 12:24:04.486406] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid677605 ] 00:19:59.167 [2024-06-10 12:24:04.559754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.167 [2024-06-10 12:24:04.656524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.167 [2024-06-10 12:24:04.656641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.167 [2024-06-10 12:24:04.656644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.426 I/O targets: 00:19:59.426 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:59.426 00:19:59.426 00:19:59.426 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.426 http://cunit.sourceforge.net/ 00:19:59.426 00:19:59.426 00:19:59.426 Suite: bdevio tests on: Nvme1n1 00:19:59.426 Test: blockdev write read block ...passed 00:19:59.426 Test: blockdev write zeroes read block ...passed 00:19:59.426 Test: blockdev write zeroes read no split ...passed 00:19:59.686 Test: blockdev write zeroes read split ...passed 00:19:59.686 Test: blockdev write zeroes read split partial ...passed 00:19:59.686 Test: blockdev reset ...[2024-06-10 12:24:05.134432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.686 [2024-06-10 12:24:05.134487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc9900 (9): Bad file descriptor 00:19:59.686 [2024-06-10 12:24:05.189620] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
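(Note how bdevio received that configuration: the invocation above is bdevio --json /dev/fd/62 --no-huge -s 1024, where /dev/fd/62 is the read end of a bash process substitution carrying the JSON just printed, so the generated config is piped straight into the app with no temporary file. The --no-huge -s 1024 pair caps the app at 1024 MB of ordinary, non-hugepage memory (matching the "-m 1024 --no-huge" EAL parameters logged at startup), which is the point of this test. Schematically, with the long Jenkins path shortened to $rootdir:

$rootdir/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The blockdev reset test then disconnects and reconnects the controller; the transient "Bad file descriptor" on the old qpair is part of that teardown, and the test completes once bdev_nvme reports "Resetting controller successful".)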
00:19:59.686 passed 00:19:59.686 Test: blockdev write read 8 blocks ...passed 00:19:59.686 Test: blockdev write read size > 128k ...passed 00:19:59.686 Test: blockdev write read invalid size ...passed 00:19:59.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.686 Test: blockdev write read max offset ...passed 00:19:59.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.947 Test: blockdev writev readv 8 blocks ...passed 00:19:59.947 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.947 Test: blockdev writev readv block ...passed 00:19:59.947 Test: blockdev writev readv size > 128k ...passed 00:19:59.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.947 Test: blockdev comparev and writev ...[2024-06-10 12:24:05.494842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.494869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.494881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.494887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.495368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.495376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.495386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.495391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.495896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.495903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.495913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.495918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.496379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.496387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:59.947 [2024-06-10 12:24:05.496397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.947 [2024-06-10 12:24:05.496402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:59.947 passed 00:20:00.207 Test: blockdev nvme passthru rw ...passed 00:20:00.207 Test: blockdev nvme passthru vendor specific ...[2024-06-10 12:24:05.582040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:00.207 [2024-06-10 12:24:05.582050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:00.207 [2024-06-10 12:24:05.582467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:00.207 [2024-06-10 12:24:05.582474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:00.207 [2024-06-10 12:24:05.582815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:00.207 [2024-06-10 12:24:05.582822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:00.207 [2024-06-10 12:24:05.583192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:00.207 [2024-06-10 12:24:05.583202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:00.207 passed 00:20:00.207 Test: blockdev nvme admin passthru ...passed 00:20:00.207 Test: blockdev copy ...passed 00:20:00.207 00:20:00.207 Run Summary: Type Total Ran Passed Failed Inactive 00:20:00.207 suites 1 1 n/a 0 0 00:20:00.207 tests 23 23 23 0 0 00:20:00.207 asserts 152 152 152 0 n/a 00:20:00.208 00:20:00.208 Elapsed time = 1.449 seconds 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.468 rmmod nvme_tcp 00:20:00.468 rmmod nvme_fabrics 00:20:00.468 rmmod nvme_keyring 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 677385 ']' 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 677385 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 677385 ']' 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 677385 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:00.468 12:24:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 677385 00:20:00.468 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:20:00.468 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:20:00.468 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 677385' 00:20:00.468 killing process with pid 677385 00:20:00.468 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 677385 00:20:00.468 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 677385 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.728 12:24:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.277 12:24:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:03.277 00:20:03.277 real 0m13.312s 00:20:03.277 user 0m15.085s 00:20:03.277 sys 0m7.088s 00:20:03.277 12:24:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:03.277 12:24:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 ************************************ 00:20:03.277 END TEST nvmf_bdevio_no_huge 00:20:03.277 ************************************ 00:20:03.277 12:24:08 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:03.277 12:24:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:03.277 12:24:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:03.277 12:24:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 ************************************ 00:20:03.277 START TEST nvmf_tls 00:20:03.277 ************************************ 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:03.277 * Looking for test storage... 
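The teardown above follows the suite's standard shape: killprocess verifies that pid 677385 still names the expected process (ps reports reactor_3, the target's reactor thread) before signalling and reaping it, then nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring and flushes the namespace interfaces so the next test starts clean. Reconstructed from the xtrace as an abbreviated sketch (the sudo re-resolution branch is an assumption, since this run never takes it; this is not the canonical autotest_common.sh source):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                         # require a pid argument
        kill -0 "$pid" || return 0                        # already gone: nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")       # reactor_3 here
            [[ $process_name == sudo ]] && pid=$(pgrep -P "$pid") # assumed sudo handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                # reap it so ports and hugepages free up for the next test
    }

With nvmf_bdevio_no_huge summarized (1 suite, 23/23 tests, 152/152 asserts, ~1.45 s), nvmf.sh moves on to run_test nvmf_tls; the storage probe around this point is tls.sh locating its workspace before sourcing nvmf/common.sh.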
00:20:03.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.277 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.278 12:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.415 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.415 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.415 
12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.415 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.415 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.415 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:11.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:11.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:11.416 Found net devices under 0000:31:00.0: cvl_0_0 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:11.416 Found net devices under 0000:31:00.1: cvl_0_1 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:20:11.416 00:20:11.416 --- 10.0.0.2 ping statistics --- 00:20:11.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.416 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:20:11.416 00:20:11.416 --- 10.0.0.1 ping statistics --- 00:20:11.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.416 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=683018 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 683018 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 683018 ']' 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:11.416 12:24:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.416 [2024-06-10 12:24:16.804924] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:20:11.416 [2024-06-10 12:24:16.804990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.416 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.416 [2024-06-10 12:24:16.901291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.416 [2024-06-10 12:24:16.994811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.417 [2024-06-10 12:24:16.994871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.417 [2024-06-10 12:24:16.994879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.417 [2024-06-10 12:24:16.994886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.417 [2024-06-10 12:24:16.994893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.417 [2024-06-10 12:24:16.994919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:12.057 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:12.318 true 00:20:12.318 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.318 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:12.578 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:12.578 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:12.578 12:24:17 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:12.578 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.578 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:12.840 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:12.840 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:12.840 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.100 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:13.361 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:13.361 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:13.361 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:13.361 12:24:18 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.361 12:24:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:13.622 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:13.622 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:13.622 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:13.883 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3WFAGCT3FB 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.muRksRrqZt 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3WFAGCT3FB 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.muRksRrqZt 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:14.144 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:14.404 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3WFAGCT3FB 00:20:14.404 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3WFAGCT3FB 00:20:14.404 12:24:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.664 [2024-06-10 12:24:20.074371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.664 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.664 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.925 [2024-06-10 12:24:20.351039] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.925 [2024-06-10 12:24:20.351207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.925 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.925 malloc0 00:20:14.925 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.184 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WFAGCT3FB 00:20:15.445 [2024-06-10 12:24:20.793984] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.445 12:24:20 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3WFAGCT3FB 00:20:15.445 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.437 Initializing NVMe Controllers 00:20:25.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:25.437 Initialization complete. Launching workers. 
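The two NVMeTLSkey-1 strings generated above are the NVMe/TCP TLS PSK interchange form of a configured key, and the trace shows how format_interchange_psk builds them: take the raw key characters, append a CRC32, base64 the result, and wrap it as NVMeTLSkey-1:<hash>:<base64>: with the 01 field selecting the SHA-256 flavor. A standalone sketch that mirrors the python helper visible in the trace (CRC byte order inferred as little-endian; the result is checkable against the first key printed above):

    format_key_sketch() {
        local prefix=$1 key=$2 digest=$3
        # append a little-endian CRC32 to the raw key bytes, then base64-wrap
        python -c "import base64, zlib; k = b'$key'; c = zlib.crc32(k).to_bytes(4, 'little'); print('$prefix:%02x:%s:' % ($digest, base64.b64encode(k + c).decode()))"
    }
    format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Both keys land in mode-0600 temp files because the target consumes them as file paths: nvmf_subsystem_add_host --psk is the path-based mechanism whose deprecation warning (nvmf_tcp_psk_path, removal in v24.09) appears just above. With the target assembled, transport, cnode1, a TLS listener via -k, malloc0 as namespace 1, and host1 bound to the first key, the first data-path check is the spdk_nvme_perf run whose throughput table follows.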
00:20:25.437 ======================================================== 00:20:25.437 Latency(us) 00:20:25.437 Device Information : IOPS MiB/s Average min max 00:20:25.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19073.37 74.51 3355.48 1281.90 4214.41 00:20:25.437 ======================================================== 00:20:25.437 Total : 19073.37 74.51 3355.48 1281.90 4214.41 00:20:25.437 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WFAGCT3FB 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3WFAGCT3FB' 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=685849 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 685849 /var/tmp/bdevperf.sock 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.437 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 685849 ']' 00:20:25.438 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.438 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:25.438 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.438 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:25.438 12:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.438 [2024-06-10 12:24:30.950706] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:20:25.438 [2024-06-10 12:24:30.950763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685849 ] 00:20:25.438 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.438 [2024-06-10 12:24:31.005185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.697 [2024-06-10 12:24:31.057886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.267 12:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:26.267 12:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:26.267 12:24:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WFAGCT3FB 00:20:26.267 [2024-06-10 12:24:31.865883] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.267 [2024-06-10 12:24:31.865940] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.528 TLSTESTn1 00:20:26.528 12:24:31 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.528 Running I/O for 10 seconds... 00:20:36.522 00:20:36.522 Latency(us) 00:20:36.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.522 Verification LBA range: start 0x0 length 0x2000 00:20:36.522 TLSTESTn1 : 10.02 6133.23 23.96 0.00 0.00 20835.53 5761.71 31457.28 00:20:36.522 =================================================================================================================== 00:20:36.522 Total : 6133.23 23.96 0.00 0.00 20835.53 5761.71 31457.28 00:20:36.522 0 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 685849 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 685849 ']' 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 685849 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:36.522 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 685849 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 685849' 00:20:36.782 killing process with pid 685849 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 685849 00:20:36.782 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.782 00:20:36.782 Latency(us) 00:20:36.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.782 
=================================================================================================================== 00:20:36.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.782 [2024-06-10 12:24:42.159656] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 685849 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muRksRrqZt 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muRksRrqZt 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muRksRrqZt 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.muRksRrqZt' 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=687949 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 687949 /var/tmp/bdevperf.sock 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 687949 ']' 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:36.782 12:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.782 [2024-06-10 12:24:42.332179] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:20:36.782 [2024-06-10 12:24:42.332241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687949 ] 00:20:36.782 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.782 [2024-06-10 12:24:42.386620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.042 [2024-06-10 12:24:42.438617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.611 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:37.611 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:37.612 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.muRksRrqZt 00:20:37.871 [2024-06-10 12:24:43.218802] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.871 [2024-06-10 12:24:43.218860] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.871 [2024-06-10 12:24:43.223842] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.871 [2024-06-10 12:24:43.223867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2051880 (107): Transport endpoint is not connected 00:20:37.871 [2024-06-10 12:24:43.224850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2051880 (9): Bad file descriptor 00:20:37.871 [2024-06-10 12:24:43.225851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.871 [2024-06-10 12:24:43.225858] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.871 [2024-06-10 12:24:43.225865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
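This first negative case hands bdevperf the second key, /tmp/tmp.muRksRrqZt, for a host that was registered with the first one, so the TLS handshake never produces a usable qpair: the flush fails with errno 107 (Transport endpoint is not connected), nvme_ctrlr_process_init gives up on cnode1, and the attach RPC surfaces code -5 in the request/response dump that follows. That outcome is the point of the test: tls.sh@146 wraps the attempt in NOT, the autotest_common.sh inverter whose es bookkeeping is visible in the trace around this failure. Abbreviated sketch of the pattern:

    # NOT succeeds only when the wrapped command fails cleanly - abbreviated
    # from the exit-status checks this trace shows (es=1, es > 128, !es == 0).
    NOT() {
        local es=0
        "$@" || es=$?
        ((es == 0)) && return 1     # command succeeded: the NOT assertion fails
        ((es > 128)) && return 1    # died on a signal: not an acceptable failure
        return 0                    # ordinary error exit: the expected outcome here
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muRksRrqZt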
00:20:37.871 request: 00:20:37.871 { 00:20:37.871 "name": "TLSTEST", 00:20:37.871 "trtype": "tcp", 00:20:37.871 "traddr": "10.0.0.2", 00:20:37.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.871 "adrfam": "ipv4", 00:20:37.871 "trsvcid": "4420", 00:20:37.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.871 "psk": "/tmp/tmp.muRksRrqZt", 00:20:37.871 "method": "bdev_nvme_attach_controller", 00:20:37.872 "req_id": 1 00:20:37.872 } 00:20:37.872 Got JSON-RPC error response 00:20:37.872 response: 00:20:37.872 { 00:20:37.872 "code": -5, 00:20:37.872 "message": "Input/output error" 00:20:37.872 } 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 687949 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 687949 ']' 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 687949 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 687949 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 687949' 00:20:37.872 killing process with pid 687949 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 687949 00:20:37.872 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.872 00:20:37.872 Latency(us) 00:20:37.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.872 =================================================================================================================== 00:20:37.872 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.872 [2024-06-10 12:24:43.294216] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 687949 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3WFAGCT3FB 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3WFAGCT3FB 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3WFAGCT3FB 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3WFAGCT3FB' 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688283 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688283 /var/tmp/bdevperf.sock 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 688283 ']' 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:37.872 12:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.872 [2024-06-10 12:24:43.459817] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:20:37.872 [2024-06-10 12:24:43.459871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688283 ] 00:20:38.132 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.132 [2024-06-10 12:24:43.515972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.132 [2024-06-10 12:24:43.567265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.701 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:38.701 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:38.701 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3WFAGCT3FB 00:20:38.962 [2024-06-10 12:24:44.355171] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.962 [2024-06-10 12:24:44.355239] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:38.962 [2024-06-10 12:24:44.366536] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:38.962 [2024-06-10 12:24:44.366554] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:38.962 [2024-06-10 12:24:44.366574] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.962 [2024-06-10 12:24:44.367204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ba880 (107): Transport endpoint is not connected 00:20:38.962 [2024-06-10 12:24:44.368200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ba880 (9): Bad file descriptor 00:20:38.962 [2024-06-10 12:24:44.369202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-06-10 12:24:44.369213] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.962 [2024-06-10 12:24:44.369220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
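The second negative case flips the variable: the key is the valid one but the hostnqn is host2, which was never added to cnode1. The target-side errors above pinpoint where this dies; the TLS client offered the PSK identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", and tcp_sock_get_key / posix_sock_psk_find_session_server_cb found no key registered for that (hostnqn, subnqn) pair, so the handshake stalls and the initiator reports the same errno-107, code -5 signature as before (request dump follows). Making this attach succeed would require registering host2 first, with the same RPC tls.sh used for host1. The call below is hypothetical, shown only to make the failure mode concrete, and deliberately not what the test wants:

    # Hypothetical fix: bind host2 to a key on cnode1 before attaching.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3WFAGCT3FB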
00:20:38.962 request: 00:20:38.962 { 00:20:38.962 "name": "TLSTEST", 00:20:38.962 "trtype": "tcp", 00:20:38.962 "traddr": "10.0.0.2", 00:20:38.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.962 "adrfam": "ipv4", 00:20:38.962 "trsvcid": "4420", 00:20:38.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.962 "psk": "/tmp/tmp.3WFAGCT3FB", 00:20:38.962 "method": "bdev_nvme_attach_controller", 00:20:38.962 "req_id": 1 00:20:38.962 } 00:20:38.962 Got JSON-RPC error response 00:20:38.962 response: 00:20:38.962 { 00:20:38.962 "code": -5, 00:20:38.962 "message": "Input/output error" 00:20:38.962 } 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 688283 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 688283 ']' 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 688283 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 688283 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 688283' 00:20:38.962 killing process with pid 688283 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 688283 00:20:38.962 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.962 00:20:38.962 Latency(us) 00:20:38.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.962 =================================================================================================================== 00:20:38.962 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.962 [2024-06-10 12:24:44.452798] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 688283 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WFAGCT3FB 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WFAGCT3FB 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WFAGCT3FB 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3WFAGCT3FB' 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688440 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688440 /var/tmp/bdevperf.sock 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 688440 ']' 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:38.962 12:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.223 [2024-06-10 12:24:44.608840] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:20:39.223 [2024-06-10 12:24:44.608896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688440 ] 00:20:39.223 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.223 [2024-06-10 12:24:44.665056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.223 [2024-06-10 12:24:44.717181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.802 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:39.802 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:39.802 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WFAGCT3FB 00:20:40.128 [2024-06-10 12:24:45.525417] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.128 [2024-06-10 12:24:45.525484] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.128 [2024-06-10 12:24:45.529884] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:40.128 [2024-06-10 12:24:45.529900] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:40.128 [2024-06-10 12:24:45.529919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:40.128 [2024-06-10 12:24:45.530584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b52880 (107): Transport endpoint is not connected 00:20:40.128 [2024-06-10 12:24:45.531579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b52880 (9): Bad file descriptor 00:20:40.128 [2024-06-10 12:24:45.532580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:40.128 [2024-06-10 12:24:45.532587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:40.128 [2024-06-10 12:24:45.532595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
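For reference, rpc.py is a thin client: the same attach can be replayed as raw JSON-RPC over the bdevperf Unix socket. A sketch assuming socat is present on the node, with parameters copied from the dump below; the server holds the connection open, so interrupt once the reply prints.

socat - UNIX-CONNECT:/var/tmp/bdevperf.sock <<'JSON'
{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller", "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2", "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.3WFAGCT3FB"}}
JSON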
00:20:40.128 request: 00:20:40.128 { 00:20:40.128 "name": "TLSTEST", 00:20:40.128 "trtype": "tcp", 00:20:40.128 "traddr": "10.0.0.2", 00:20:40.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.128 "adrfam": "ipv4", 00:20:40.128 "trsvcid": "4420", 00:20:40.128 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:40.128 "psk": "/tmp/tmp.3WFAGCT3FB", 00:20:40.128 "method": "bdev_nvme_attach_controller", 00:20:40.128 "req_id": 1 00:20:40.128 } 00:20:40.128 Got JSON-RPC error response 00:20:40.128 response: 00:20:40.128 { 00:20:40.128 "code": -5, 00:20:40.128 "message": "Input/output error" 00:20:40.128 } 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 688440 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 688440 ']' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 688440 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 688440 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 688440' 00:20:40.128 killing process with pid 688440 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 688440 00:20:40.128 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.128 00:20:40.128 Latency(us) 00:20:40.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.128 =================================================================================================================== 00:20:40.128 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.128 [2024-06-10 12:24:45.614143] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 688440 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:40.128 
12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688645 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688645 /var/tmp/bdevperf.sock 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 688645 ']' 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:40.128 12:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.388 [2024-06-10 12:24:45.768357] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
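This is the harness shape every case in this block repeats: launch bdevperf idle on a private RPC socket, wait for the socket, then configure it and (here) fail it on purpose. The invocation, flags as traced, as a sketch with the long Jenkins path shortened into a variable:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -m 0x4: core mask; -z: start unconfigured and wait for RPCs;
# -r: private RPC socket; -q/-o/-w/-t: qd 128, 4 KiB I/O, verify, 10 s
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &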
00:20:40.388 [2024-06-10 12:24:45.768407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688645 ] 00:20:40.388 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.388 [2024-06-10 12:24:45.823835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.388 [2024-06-10 12:24:45.873861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.960 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:40.960 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:40.960 12:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:41.221 [2024-06-10 12:24:46.692695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.221 [2024-06-10 12:24:46.694519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf51e0 (9): Bad file descriptor 00:20:41.221 [2024-06-10 12:24:46.695518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.221 [2024-06-10 12:24:46.695525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.221 [2024-06-10 12:24:46.695532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
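Errno 107 in the read path is ENOTCONN: with no --psk the initiator cannot complete a TLS session against the TLS-required listener, the target side evidently closes the transport, and the next receive reports an unconnected endpoint. A quick decode, using only standard Python:

python3 -c 'import errno, os; print(errno.errorcode[107], "-", os.strerror(107))'
# ENOTCONN - Transport endpoint is not connected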
00:20:41.221 request: 00:20:41.221 { 00:20:41.221 "name": "TLSTEST", 00:20:41.221 "trtype": "tcp", 00:20:41.221 "traddr": "10.0.0.2", 00:20:41.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.221 "adrfam": "ipv4", 00:20:41.221 "trsvcid": "4420", 00:20:41.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.221 "method": "bdev_nvme_attach_controller", 00:20:41.221 "req_id": 1 00:20:41.221 } 00:20:41.221 Got JSON-RPC error response 00:20:41.221 response: 00:20:41.221 { 00:20:41.221 "code": -5, 00:20:41.221 "message": "Input/output error" 00:20:41.221 } 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 688645 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 688645 ']' 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 688645 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 688645 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 688645' 00:20:41.221 killing process with pid 688645 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 688645 00:20:41.221 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.221 00:20:41.221 Latency(us) 00:20:41.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.221 =================================================================================================================== 00:20:41.221 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.221 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 688645 00:20:41.480 12:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 683018 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 683018 ']' 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 683018 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 683018 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 683018' 00:20:41.481 killing process with pid 683018 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 683018 00:20:41.481 
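The es=1 bookkeeping above is the suite's inverted-assertion pattern: the attach is supposed to fail, and the case passes only when it does. A rough sketch of the idea, not the exact autotest_common.sh implementation:

NOT() {
    # Invert the outcome: succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}

# Same spirit as the log: a command that must fail makes the test pass.
NOT false && echo 'expected failure observed'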
[2024-06-10 12:24:46.939289] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:41.481 12:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 683018 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:41.481 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.pDBy2MD2Qr 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.pDBy2MD2Qr 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=689000 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 689000 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 689000 ']' 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:41.741 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.741 [2024-06-10 12:24:47.176153] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
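The key_long printed above is the TLS PSK interchange form: a prefix, a two-digit digest id, then base64 of the configured key bytes with a 4-byte checksum appended, colon-terminated. A sketch that reproduces the logged value, assuming (as the format_key trace suggests) the checksum is a little-endian CRC32 of the key bytes:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
# the digest argument 2 becomes the 02 field of the prefix
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF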
00:20:41.741 [2024-06-10 12:24:47.176208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.741 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.741 [2024-06-10 12:24:47.263309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.741 [2024-06-10 12:24:47.315322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.741 [2024-06-10 12:24:47.315353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.741 [2024-06-10 12:24:47.315359] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.741 [2024-06-10 12:24:47.315364] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.742 [2024-06-10 12:24:47.315368] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.742 [2024-06-10 12:24:47.315384] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.684 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:42.684 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:42.684 12:24:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.684 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:42.684 12:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.684 12:24:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.684 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:20:42.684 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pDBy2MD2Qr 00:20:42.684 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.684 [2024-06-10 12:24:48.152588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.684 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.945 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.945 [2024-06-10 12:24:48.449308] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.945 [2024-06-10 12:24:48.449488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.945 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.206 malloc0 00:20:43.206 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:43.206 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 
00:20:43.467 [2024-06-10 12:24:48.916298] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDBy2MD2Qr 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pDBy2MD2Qr' 00:20:43.467 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=689366 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 689366 /var/tmp/bdevperf.sock 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 689366 ']' 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:43.468 12:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.468 [2024-06-10 12:24:48.961837] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
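Condensed from the trace above, the target-side TLS provisioning is a six-call RPC sequence; paths and arguments exactly as in this run, with -k being what marks the listener as TLS-required:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr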
00:20:43.468 [2024-06-10 12:24:48.961882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689366 ] 00:20:43.468 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.468 [2024-06-10 12:24:49.016574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.468 [2024-06-10 12:24:49.069213] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.729 12:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:43.729 12:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:43.729 12:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:20:43.729 [2024-06-10 12:24:49.283633] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.729 [2024-06-10 12:24:49.283689] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.990 TLSTESTn1 00:20:43.990 12:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.990 Running I/O for 10 seconds... 00:20:53.994 00:20:53.994 Latency(us) 00:20:53.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.994 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:53.994 Verification LBA range: start 0x0 length 0x2000 00:20:53.994 TLSTESTn1 : 10.02 4729.56 18.47 0.00 0.00 27030.28 4560.21 73400.32 00:20:53.994 =================================================================================================================== 00:20:53.994 Total : 4729.56 18.47 0.00 0.00 27030.28 4560.21 73400.32 00:20:53.994 0 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 689366 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 689366 ']' 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 689366 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 689366 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 689366' 00:20:53.994 killing process with pid 689366 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 689366 00:20:53.994 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.994 00:20:53.994 Latency(us) 00:20:53.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.994 
=================================================================================================================== 00:20:53.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.994 [2024-06-10 12:24:59.572462] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:53.994 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 689366 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.pDBy2MD2Qr 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDBy2MD2Qr 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDBy2MD2Qr 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDBy2MD2Qr 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pDBy2MD2Qr' 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=691372 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 691372 /var/tmp/bdevperf.sock 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 691372 ']' 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:54.254 12:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.255 [2024-06-10 12:24:59.742551] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
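Sanity check on the successful TLSTESTn1 verification run above: the 18.47 MiB/s column is just the measured IOPS times the 4096-byte I/O size.

echo '4729.56 * 4096 / 1048576' | bc -l   # = 18.4748..., matching the table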
00:20:54.255 [2024-06-10 12:24:59.742604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691372 ] 00:20:54.255 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.255 [2024-06-10 12:24:59.798604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.255 [2024-06-10 12:24:59.849307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:20:55.197 [2024-06-10 12:25:00.657396] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.197 [2024-06-10 12:25:00.657439] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:55.197 [2024-06-10 12:25:00.657444] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.pDBy2MD2Qr 00:20:55.197 request: 00:20:55.197 { 00:20:55.197 "name": "TLSTEST", 00:20:55.197 "trtype": "tcp", 00:20:55.197 "traddr": "10.0.0.2", 00:20:55.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.197 "adrfam": "ipv4", 00:20:55.197 "trsvcid": "4420", 00:20:55.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.197 "psk": "/tmp/tmp.pDBy2MD2Qr", 00:20:55.197 "method": "bdev_nvme_attach_controller", 00:20:55.197 "req_id": 1 00:20:55.197 } 00:20:55.197 Got JSON-RPC error response 00:20:55.197 response: 00:20:55.197 { 00:20:55.197 "code": -1, 00:20:55.197 "message": "Operation not permitted" 00:20:55.197 } 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 691372 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 691372 ']' 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 691372 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 691372 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:55.197 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:55.198 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 691372' 00:20:55.198 killing process with pid 691372 00:20:55.198 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 691372 00:20:55.198 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.198 00:20:55.198 Latency(us) 00:20:55.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.198 =================================================================================================================== 00:20:55.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.198 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # 
wait 691372 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 689000 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 689000 ']' 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 689000 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 689000 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 689000' 00:20:55.459 killing process with pid 689000 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 689000 00:20:55.459 [2024-06-10 12:25:00.903189] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:55.459 12:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 689000 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=691716 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 691716 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 691716 ']' 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:55.459 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.720 [2024-06-10 12:25:01.083061] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
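Note the nvmf target in this job always starts under ip netns exec cvl_0_0_ns_spdk, so the 10.0.0.2:4420 listener only exists inside that namespace. One way to confirm it from the host; a sketch assuming iproute2's ss and the namespace name from this trace:

ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'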
00:20:55.720 [2024-06-10 12:25:01.083115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.720 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.720 [2024-06-10 12:25:01.170007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.720 [2024-06-10 12:25:01.230162] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.720 [2024-06-10 12:25:01.230200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.720 [2024-06-10 12:25:01.230205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.720 [2024-06-10 12:25:01.230210] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.720 [2024-06-10 12:25:01.230214] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.720 [2024-06-10 12:25:01.230234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pDBy2MD2Qr 00:20:56.292 12:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.554 [2024-06-10 12:25:02.024910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.554 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.815 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.815 [2024-06-10 12:25:02.317619] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:20:56.815 [2024-06-10 12:25:02.317785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.815 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:57.076 malloc0 00:20:57.076 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.076 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:20:57.337 [2024-06-10 12:25:02.740255] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:57.337 [2024-06-10 12:25:02.740273] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:57.337 [2024-06-10 12:25:02.740292] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:57.337 request: 00:20:57.337 { 00:20:57.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.337 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.337 "psk": "/tmp/tmp.pDBy2MD2Qr", 00:20:57.337 "method": "nvmf_subsystem_add_host", 00:20:57.337 "req_id": 1 00:20:57.337 } 00:20:57.337 Got JSON-RPC error response 00:20:57.337 response: 00:20:57.337 { 00:20:57.337 "code": -32603, 00:20:57.337 "message": "Internal error" 00:20:57.337 } 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 691716 ']' 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 691716' 00:20:57.337 killing process with pid 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 691716 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.pDBy2MD2Qr 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:57.337 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=692090 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 692090 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 692090 ']' 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:57.598 12:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.598 [2024-06-10 12:25:02.993459] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:20:57.598 [2024-06-10 12:25:02.993516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.598 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.598 [2024-06-10 12:25:03.082609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.598 [2024-06-10 12:25:03.136426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.598 [2024-06-10 12:25:03.136455] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.598 [2024-06-10 12:25:03.136460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.598 [2024-06-10 12:25:03.136465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.598 [2024-06-10 12:25:03.136469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
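Both permission rejections above (bdev_nvme refusing to load the key on the initiator side, tcp_load_psk refusing it on the target side) fired while the key file sat at 0666 and clear once it goes back to 0600. A hypothetical shell mirror of the check the code appears to apply, that no group/other mode bits may be set:

check_psk_perms() {
    # Reject a key file whose mode has any group/other bits set.
    local mode
    mode=$(stat -c '%a' "$1")
    if (( 8#$mode & 8#077 )); then
        echo "Incorrect permissions for PSK file: $1" >&2
        return 1
    fi
}

touch /tmp/psk.demo
chmod 0666 /tmp/psk.demo; check_psk_perms /tmp/psk.demo || echo rejected
chmod 0600 /tmp/psk.demo; check_psk_perms /tmp/psk.demo && echo accepted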
00:20:57.598 [2024-06-10 12:25:03.136482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.171 12:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:58.171 12:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:58.171 12:25:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.171 12:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:58.171 12:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.432 12:25:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.432 12:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:20:58.432 12:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pDBy2MD2Qr 00:20:58.432 12:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.432 [2024-06-10 12:25:03.929491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.432 12:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.693 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:58.693 [2024-06-10 12:25:04.234232] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.693 [2024-06-10 12:25:04.234405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.693 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.954 malloc0 00:20:58.954 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.954 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:20:59.216 [2024-06-10 12:25:04.693204] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=692452 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 692452 /var/tmp/bdevperf.sock 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 692452 ']' 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:59.216 12:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.216 [2024-06-10 12:25:04.754369] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:20:59.216 [2024-06-10 12:25:04.754418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692452 ] 00:20:59.216 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.216 [2024-06-10 12:25:04.809901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.477 [2024-06-10 12:25:04.862038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.051 12:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:00.051 12:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:00.051 12:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:21:00.051 [2024-06-10 12:25:05.634090] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.051 [2024-06-10 12:25:05.634146] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.313 TLSTESTn1 00:21:00.313 12:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:00.575 12:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:00.575 "subsystems": [ 00:21:00.575 { 00:21:00.575 "subsystem": "keyring", 00:21:00.575 "config": [] 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "subsystem": "iobuf", 00:21:00.575 "config": [ 00:21:00.575 { 00:21:00.575 "method": "iobuf_set_options", 00:21:00.575 "params": { 00:21:00.575 "small_pool_count": 8192, 00:21:00.575 "large_pool_count": 1024, 00:21:00.575 "small_bufsize": 8192, 00:21:00.575 "large_bufsize": 135168 00:21:00.575 } 00:21:00.575 } 00:21:00.575 ] 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "subsystem": "sock", 00:21:00.575 "config": [ 00:21:00.575 { 00:21:00.575 "method": "sock_set_default_impl", 00:21:00.575 "params": { 00:21:00.575 "impl_name": "posix" 00:21:00.575 } 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "method": "sock_impl_set_options", 00:21:00.575 "params": { 00:21:00.575 "impl_name": "ssl", 00:21:00.575 "recv_buf_size": 4096, 00:21:00.575 "send_buf_size": 4096, 00:21:00.575 "enable_recv_pipe": true, 00:21:00.575 "enable_quickack": false, 00:21:00.575 "enable_placement_id": 0, 00:21:00.575 "enable_zerocopy_send_server": true, 00:21:00.575 "enable_zerocopy_send_client": false, 00:21:00.575 "zerocopy_threshold": 0, 00:21:00.575 "tls_version": 0, 00:21:00.575 "enable_ktls": false 00:21:00.575 } 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "method": "sock_impl_set_options", 00:21:00.575 "params": { 00:21:00.575 "impl_name": "posix", 00:21:00.575 "recv_buf_size": 2097152, 00:21:00.575 "send_buf_size": 2097152, 
00:21:00.575 "enable_recv_pipe": true, 00:21:00.575 "enable_quickack": false, 00:21:00.575 "enable_placement_id": 0, 00:21:00.575 "enable_zerocopy_send_server": true, 00:21:00.575 "enable_zerocopy_send_client": false, 00:21:00.575 "zerocopy_threshold": 0, 00:21:00.575 "tls_version": 0, 00:21:00.575 "enable_ktls": false 00:21:00.575 } 00:21:00.575 } 00:21:00.575 ] 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "subsystem": "vmd", 00:21:00.575 "config": [] 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "subsystem": "accel", 00:21:00.575 "config": [ 00:21:00.575 { 00:21:00.575 "method": "accel_set_options", 00:21:00.575 "params": { 00:21:00.575 "small_cache_size": 128, 00:21:00.575 "large_cache_size": 16, 00:21:00.575 "task_count": 2048, 00:21:00.575 "sequence_count": 2048, 00:21:00.575 "buf_count": 2048 00:21:00.575 } 00:21:00.575 } 00:21:00.575 ] 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "subsystem": "bdev", 00:21:00.575 "config": [ 00:21:00.575 { 00:21:00.575 "method": "bdev_set_options", 00:21:00.575 "params": { 00:21:00.575 "bdev_io_pool_size": 65535, 00:21:00.575 "bdev_io_cache_size": 256, 00:21:00.575 "bdev_auto_examine": true, 00:21:00.575 "iobuf_small_cache_size": 128, 00:21:00.575 "iobuf_large_cache_size": 16 00:21:00.575 } 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "method": "bdev_raid_set_options", 00:21:00.575 "params": { 00:21:00.575 "process_window_size_kb": 1024 00:21:00.575 } 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "method": "bdev_iscsi_set_options", 00:21:00.575 "params": { 00:21:00.575 "timeout_sec": 30 00:21:00.575 } 00:21:00.575 }, 00:21:00.575 { 00:21:00.575 "method": "bdev_nvme_set_options", 00:21:00.575 "params": { 00:21:00.575 "action_on_timeout": "none", 00:21:00.575 "timeout_us": 0, 00:21:00.576 "timeout_admin_us": 0, 00:21:00.576 "keep_alive_timeout_ms": 10000, 00:21:00.576 "arbitration_burst": 0, 00:21:00.576 "low_priority_weight": 0, 00:21:00.576 "medium_priority_weight": 0, 00:21:00.576 "high_priority_weight": 0, 00:21:00.576 "nvme_adminq_poll_period_us": 10000, 00:21:00.576 "nvme_ioq_poll_period_us": 0, 00:21:00.576 "io_queue_requests": 0, 00:21:00.576 "delay_cmd_submit": true, 00:21:00.576 "transport_retry_count": 4, 00:21:00.576 "bdev_retry_count": 3, 00:21:00.576 "transport_ack_timeout": 0, 00:21:00.576 "ctrlr_loss_timeout_sec": 0, 00:21:00.576 "reconnect_delay_sec": 0, 00:21:00.576 "fast_io_fail_timeout_sec": 0, 00:21:00.576 "disable_auto_failback": false, 00:21:00.576 "generate_uuids": false, 00:21:00.576 "transport_tos": 0, 00:21:00.576 "nvme_error_stat": false, 00:21:00.576 "rdma_srq_size": 0, 00:21:00.576 "io_path_stat": false, 00:21:00.576 "allow_accel_sequence": false, 00:21:00.576 "rdma_max_cq_size": 0, 00:21:00.576 "rdma_cm_event_timeout_ms": 0, 00:21:00.576 "dhchap_digests": [ 00:21:00.576 "sha256", 00:21:00.576 "sha384", 00:21:00.576 "sha512" 00:21:00.576 ], 00:21:00.576 "dhchap_dhgroups": [ 00:21:00.576 "null", 00:21:00.576 "ffdhe2048", 00:21:00.576 "ffdhe3072", 00:21:00.576 "ffdhe4096", 00:21:00.576 "ffdhe6144", 00:21:00.576 "ffdhe8192" 00:21:00.576 ] 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "bdev_nvme_set_hotplug", 00:21:00.576 "params": { 00:21:00.576 "period_us": 100000, 00:21:00.576 "enable": false 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "bdev_malloc_create", 00:21:00.576 "params": { 00:21:00.576 "name": "malloc0", 00:21:00.576 "num_blocks": 8192, 00:21:00.576 "block_size": 4096, 00:21:00.576 "physical_block_size": 4096, 00:21:00.576 "uuid": "0724022e-3d8b-4889-972b-828d962c1524", 
00:21:00.576 "optimal_io_boundary": 0 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "bdev_wait_for_examine" 00:21:00.576 } 00:21:00.576 ] 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "subsystem": "nbd", 00:21:00.576 "config": [] 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "subsystem": "scheduler", 00:21:00.576 "config": [ 00:21:00.576 { 00:21:00.576 "method": "framework_set_scheduler", 00:21:00.576 "params": { 00:21:00.576 "name": "static" 00:21:00.576 } 00:21:00.576 } 00:21:00.576 ] 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "subsystem": "nvmf", 00:21:00.576 "config": [ 00:21:00.576 { 00:21:00.576 "method": "nvmf_set_config", 00:21:00.576 "params": { 00:21:00.576 "discovery_filter": "match_any", 00:21:00.576 "admin_cmd_passthru": { 00:21:00.576 "identify_ctrlr": false 00:21:00.576 } 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_set_max_subsystems", 00:21:00.576 "params": { 00:21:00.576 "max_subsystems": 1024 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_set_crdt", 00:21:00.576 "params": { 00:21:00.576 "crdt1": 0, 00:21:00.576 "crdt2": 0, 00:21:00.576 "crdt3": 0 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_create_transport", 00:21:00.576 "params": { 00:21:00.576 "trtype": "TCP", 00:21:00.576 "max_queue_depth": 128, 00:21:00.576 "max_io_qpairs_per_ctrlr": 127, 00:21:00.576 "in_capsule_data_size": 4096, 00:21:00.576 "max_io_size": 131072, 00:21:00.576 "io_unit_size": 131072, 00:21:00.576 "max_aq_depth": 128, 00:21:00.576 "num_shared_buffers": 511, 00:21:00.576 "buf_cache_size": 4294967295, 00:21:00.576 "dif_insert_or_strip": false, 00:21:00.576 "zcopy": false, 00:21:00.576 "c2h_success": false, 00:21:00.576 "sock_priority": 0, 00:21:00.576 "abort_timeout_sec": 1, 00:21:00.576 "ack_timeout": 0, 00:21:00.576 "data_wr_pool_size": 0 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_create_subsystem", 00:21:00.576 "params": { 00:21:00.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.576 "allow_any_host": false, 00:21:00.576 "serial_number": "SPDK00000000000001", 00:21:00.576 "model_number": "SPDK bdev Controller", 00:21:00.576 "max_namespaces": 10, 00:21:00.576 "min_cntlid": 1, 00:21:00.576 "max_cntlid": 65519, 00:21:00.576 "ana_reporting": false 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_subsystem_add_host", 00:21:00.576 "params": { 00:21:00.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.576 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.576 "psk": "/tmp/tmp.pDBy2MD2Qr" 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_subsystem_add_ns", 00:21:00.576 "params": { 00:21:00.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.576 "namespace": { 00:21:00.576 "nsid": 1, 00:21:00.576 "bdev_name": "malloc0", 00:21:00.576 "nguid": "0724022E3D8B4889972B828D962C1524", 00:21:00.576 "uuid": "0724022e-3d8b-4889-972b-828d962c1524", 00:21:00.576 "no_auto_visible": false 00:21:00.576 } 00:21:00.576 } 00:21:00.576 }, 00:21:00.576 { 00:21:00.576 "method": "nvmf_subsystem_add_listener", 00:21:00.576 "params": { 00:21:00.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.577 "listen_address": { 00:21:00.577 "trtype": "TCP", 00:21:00.577 "adrfam": "IPv4", 00:21:00.577 "traddr": "10.0.0.2", 00:21:00.577 "trsvcid": "4420" 00:21:00.577 }, 00:21:00.577 "secure_channel": true 00:21:00.577 } 00:21:00.577 } 00:21:00.577 ] 00:21:00.577 } 00:21:00.577 ] 00:21:00.577 }' 00:21:00.577 12:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:00.839 12:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:00.839 "subsystems": [ 00:21:00.839 { 00:21:00.839 "subsystem": "keyring", 00:21:00.839 "config": [] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "iobuf", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "iobuf_set_options", 00:21:00.839 "params": { 00:21:00.839 "small_pool_count": 8192, 00:21:00.839 "large_pool_count": 1024, 00:21:00.839 "small_bufsize": 8192, 00:21:00.839 "large_bufsize": 135168 00:21:00.839 } 00:21:00.839 } 00:21:00.839 ] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "sock", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "sock_set_default_impl", 00:21:00.839 "params": { 00:21:00.839 "impl_name": "posix" 00:21:00.839 } 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "method": "sock_impl_set_options", 00:21:00.839 "params": { 00:21:00.839 "impl_name": "ssl", 00:21:00.839 "recv_buf_size": 4096, 00:21:00.839 "send_buf_size": 4096, 00:21:00.839 "enable_recv_pipe": true, 00:21:00.839 "enable_quickack": false, 00:21:00.839 "enable_placement_id": 0, 00:21:00.839 "enable_zerocopy_send_server": true, 00:21:00.839 "enable_zerocopy_send_client": false, 00:21:00.839 "zerocopy_threshold": 0, 00:21:00.839 "tls_version": 0, 00:21:00.839 "enable_ktls": false 00:21:00.839 } 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "method": "sock_impl_set_options", 00:21:00.839 "params": { 00:21:00.839 "impl_name": "posix", 00:21:00.839 "recv_buf_size": 2097152, 00:21:00.839 "send_buf_size": 2097152, 00:21:00.839 "enable_recv_pipe": true, 00:21:00.839 "enable_quickack": false, 00:21:00.839 "enable_placement_id": 0, 00:21:00.839 "enable_zerocopy_send_server": true, 00:21:00.839 "enable_zerocopy_send_client": false, 00:21:00.839 "zerocopy_threshold": 0, 00:21:00.839 "tls_version": 0, 00:21:00.840 "enable_ktls": false 00:21:00.840 } 00:21:00.840 } 00:21:00.840 ] 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "subsystem": "vmd", 00:21:00.840 "config": [] 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "subsystem": "accel", 00:21:00.840 "config": [ 00:21:00.840 { 00:21:00.840 "method": "accel_set_options", 00:21:00.840 "params": { 00:21:00.840 "small_cache_size": 128, 00:21:00.840 "large_cache_size": 16, 00:21:00.840 "task_count": 2048, 00:21:00.840 "sequence_count": 2048, 00:21:00.840 "buf_count": 2048 00:21:00.840 } 00:21:00.840 } 00:21:00.840 ] 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "subsystem": "bdev", 00:21:00.840 "config": [ 00:21:00.840 { 00:21:00.840 "method": "bdev_set_options", 00:21:00.840 "params": { 00:21:00.840 "bdev_io_pool_size": 65535, 00:21:00.840 "bdev_io_cache_size": 256, 00:21:00.840 "bdev_auto_examine": true, 00:21:00.840 "iobuf_small_cache_size": 128, 00:21:00.840 "iobuf_large_cache_size": 16 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_raid_set_options", 00:21:00.840 "params": { 00:21:00.840 "process_window_size_kb": 1024 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_iscsi_set_options", 00:21:00.840 "params": { 00:21:00.840 "timeout_sec": 30 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_nvme_set_options", 00:21:00.840 "params": { 00:21:00.840 "action_on_timeout": "none", 00:21:00.840 "timeout_us": 0, 00:21:00.840 "timeout_admin_us": 0, 00:21:00.840 "keep_alive_timeout_ms": 10000, 00:21:00.840 "arbitration_burst": 0, 00:21:00.840 "low_priority_weight": 0, 
00:21:00.840 "medium_priority_weight": 0, 00:21:00.840 "high_priority_weight": 0, 00:21:00.840 "nvme_adminq_poll_period_us": 10000, 00:21:00.840 "nvme_ioq_poll_period_us": 0, 00:21:00.840 "io_queue_requests": 512, 00:21:00.840 "delay_cmd_submit": true, 00:21:00.840 "transport_retry_count": 4, 00:21:00.840 "bdev_retry_count": 3, 00:21:00.840 "transport_ack_timeout": 0, 00:21:00.840 "ctrlr_loss_timeout_sec": 0, 00:21:00.840 "reconnect_delay_sec": 0, 00:21:00.840 "fast_io_fail_timeout_sec": 0, 00:21:00.840 "disable_auto_failback": false, 00:21:00.840 "generate_uuids": false, 00:21:00.840 "transport_tos": 0, 00:21:00.840 "nvme_error_stat": false, 00:21:00.840 "rdma_srq_size": 0, 00:21:00.840 "io_path_stat": false, 00:21:00.840 "allow_accel_sequence": false, 00:21:00.840 "rdma_max_cq_size": 0, 00:21:00.840 "rdma_cm_event_timeout_ms": 0, 00:21:00.840 "dhchap_digests": [ 00:21:00.840 "sha256", 00:21:00.840 "sha384", 00:21:00.840 "sha512" 00:21:00.840 ], 00:21:00.840 "dhchap_dhgroups": [ 00:21:00.840 "null", 00:21:00.840 "ffdhe2048", 00:21:00.840 "ffdhe3072", 00:21:00.840 "ffdhe4096", 00:21:00.840 "ffdhe6144", 00:21:00.840 "ffdhe8192" 00:21:00.840 ] 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_nvme_attach_controller", 00:21:00.840 "params": { 00:21:00.840 "name": "TLSTEST", 00:21:00.840 "trtype": "TCP", 00:21:00.840 "adrfam": "IPv4", 00:21:00.840 "traddr": "10.0.0.2", 00:21:00.840 "trsvcid": "4420", 00:21:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.840 "prchk_reftag": false, 00:21:00.840 "prchk_guard": false, 00:21:00.840 "ctrlr_loss_timeout_sec": 0, 00:21:00.840 "reconnect_delay_sec": 0, 00:21:00.840 "fast_io_fail_timeout_sec": 0, 00:21:00.840 "psk": "/tmp/tmp.pDBy2MD2Qr", 00:21:00.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.840 "hdgst": false, 00:21:00.840 "ddgst": false 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_nvme_set_hotplug", 00:21:00.840 "params": { 00:21:00.840 "period_us": 100000, 00:21:00.840 "enable": false 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_wait_for_examine" 00:21:00.840 } 00:21:00.840 ] 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "subsystem": "nbd", 00:21:00.840 "config": [] 00:21:00.840 } 00:21:00.840 ] 00:21:00.840 }' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 692452 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 692452 ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 692452 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 692452 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 692452' 00:21:00.840 killing process with pid 692452 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 692452 00:21:00.840 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.840 00:21:00.840 Latency(us) 00:21:00.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.840 
=================================================================================================================== 00:21:00.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.840 [2024-06-10 12:25:06.252623] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 692452 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 692090 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 692090 ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 692090 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 692090 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 692090' 00:21:00.840 killing process with pid 692090 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 692090 00:21:00.840 [2024-06-10 12:25:06.416782] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.840 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 692090 00:21:01.143 12:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:01.143 12:25:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.143 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:01.143 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.143 12:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:01.143 "subsystems": [ 00:21:01.143 { 00:21:01.143 "subsystem": "keyring", 00:21:01.143 "config": [] 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "subsystem": "iobuf", 00:21:01.143 "config": [ 00:21:01.143 { 00:21:01.143 "method": "iobuf_set_options", 00:21:01.143 "params": { 00:21:01.143 "small_pool_count": 8192, 00:21:01.143 "large_pool_count": 1024, 00:21:01.143 "small_bufsize": 8192, 00:21:01.143 "large_bufsize": 135168 00:21:01.143 } 00:21:01.143 } 00:21:01.143 ] 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "subsystem": "sock", 00:21:01.143 "config": [ 00:21:01.143 { 00:21:01.143 "method": "sock_set_default_impl", 00:21:01.143 "params": { 00:21:01.143 "impl_name": "posix" 00:21:01.143 } 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "method": "sock_impl_set_options", 00:21:01.143 "params": { 00:21:01.143 "impl_name": "ssl", 00:21:01.143 "recv_buf_size": 4096, 00:21:01.143 "send_buf_size": 4096, 00:21:01.143 "enable_recv_pipe": true, 00:21:01.143 "enable_quickack": false, 00:21:01.143 "enable_placement_id": 0, 00:21:01.143 "enable_zerocopy_send_server": true, 00:21:01.143 "enable_zerocopy_send_client": false, 00:21:01.143 "zerocopy_threshold": 0, 00:21:01.143 "tls_version": 0, 00:21:01.143 "enable_ktls": false 00:21:01.143 } 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "method": "sock_impl_set_options", 00:21:01.143 "params": { 
00:21:01.143 "impl_name": "posix", 00:21:01.143 "recv_buf_size": 2097152, 00:21:01.143 "send_buf_size": 2097152, 00:21:01.143 "enable_recv_pipe": true, 00:21:01.143 "enable_quickack": false, 00:21:01.143 "enable_placement_id": 0, 00:21:01.143 "enable_zerocopy_send_server": true, 00:21:01.143 "enable_zerocopy_send_client": false, 00:21:01.143 "zerocopy_threshold": 0, 00:21:01.143 "tls_version": 0, 00:21:01.143 "enable_ktls": false 00:21:01.143 } 00:21:01.143 } 00:21:01.143 ] 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "subsystem": "vmd", 00:21:01.143 "config": [] 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "subsystem": "accel", 00:21:01.143 "config": [ 00:21:01.143 { 00:21:01.143 "method": "accel_set_options", 00:21:01.143 "params": { 00:21:01.143 "small_cache_size": 128, 00:21:01.143 "large_cache_size": 16, 00:21:01.143 "task_count": 2048, 00:21:01.143 "sequence_count": 2048, 00:21:01.143 "buf_count": 2048 00:21:01.143 } 00:21:01.143 } 00:21:01.143 ] 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "subsystem": "bdev", 00:21:01.143 "config": [ 00:21:01.143 { 00:21:01.143 "method": "bdev_set_options", 00:21:01.143 "params": { 00:21:01.143 "bdev_io_pool_size": 65535, 00:21:01.143 "bdev_io_cache_size": 256, 00:21:01.143 "bdev_auto_examine": true, 00:21:01.143 "iobuf_small_cache_size": 128, 00:21:01.143 "iobuf_large_cache_size": 16 00:21:01.143 } 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "method": "bdev_raid_set_options", 00:21:01.143 "params": { 00:21:01.143 "process_window_size_kb": 1024 00:21:01.143 } 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "method": "bdev_iscsi_set_options", 00:21:01.143 "params": { 00:21:01.143 "timeout_sec": 30 00:21:01.143 } 00:21:01.143 }, 00:21:01.143 { 00:21:01.143 "method": "bdev_nvme_set_options", 00:21:01.143 "params": { 00:21:01.143 "action_on_timeout": "none", 00:21:01.143 "timeout_us": 0, 00:21:01.144 "timeout_admin_us": 0, 00:21:01.144 "keep_alive_timeout_ms": 10000, 00:21:01.144 "arbitration_burst": 0, 00:21:01.144 "low_priority_weight": 0, 00:21:01.144 "medium_priority_weight": 0, 00:21:01.144 "high_priority_weight": 0, 00:21:01.144 "nvme_adminq_poll_period_us": 10000, 00:21:01.144 "nvme_ioq_poll_period_us": 0, 00:21:01.144 "io_queue_requests": 0, 00:21:01.144 "delay_cmd_submit": true, 00:21:01.144 "transport_retry_count": 4, 00:21:01.144 "bdev_retry_count": 3, 00:21:01.144 "transport_ack_timeout": 0, 00:21:01.144 "ctrlr_loss_timeout_sec": 0, 00:21:01.144 "reconnect_delay_sec": 0, 00:21:01.144 "fast_io_fail_timeout_sec": 0, 00:21:01.144 "disable_auto_failback": false, 00:21:01.144 "generate_uuids": false, 00:21:01.144 "transport_tos": 0, 00:21:01.144 "nvme_error_stat": false, 00:21:01.144 "rdma_srq_size": 0, 00:21:01.144 "io_path_stat": false, 00:21:01.144 "allow_accel_sequence": false, 00:21:01.144 "rdma_max_cq_size": 0, 00:21:01.144 "rdma_cm_event_timeout_ms": 0, 00:21:01.144 "dhchap_digests": [ 00:21:01.144 "sha256", 00:21:01.144 "sha384", 00:21:01.144 "sha512" 00:21:01.144 ], 00:21:01.144 "dhchap_dhgroups": [ 00:21:01.144 "null", 00:21:01.144 "ffdhe2048", 00:21:01.144 "ffdhe3072", 00:21:01.144 "ffdhe4096", 00:21:01.144 "ffdhe6144", 00:21:01.144 "ffdhe8192" 00:21:01.144 ] 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "bdev_nvme_set_hotplug", 00:21:01.144 "params": { 00:21:01.144 "period_us": 100000, 00:21:01.144 "enable": false 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "bdev_malloc_create", 00:21:01.144 "params": { 00:21:01.144 "name": "malloc0", 00:21:01.144 "num_blocks": 8192, 00:21:01.144 "block_size": 
4096, 00:21:01.144 "physical_block_size": 4096, 00:21:01.144 "uuid": "0724022e-3d8b-4889-972b-828d962c1524", 00:21:01.144 "optimal_io_boundary": 0 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "bdev_wait_for_examine" 00:21:01.144 } 00:21:01.144 ] 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "subsystem": "nbd", 00:21:01.144 "config": [] 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "subsystem": "scheduler", 00:21:01.144 "config": [ 00:21:01.144 { 00:21:01.144 "method": "framework_set_scheduler", 00:21:01.144 "params": { 00:21:01.144 "name": "static" 00:21:01.144 } 00:21:01.144 } 00:21:01.144 ] 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "subsystem": "nvmf", 00:21:01.144 "config": [ 00:21:01.144 { 00:21:01.144 "method": "nvmf_set_config", 00:21:01.144 "params": { 00:21:01.144 "discovery_filter": "match_any", 00:21:01.144 "admin_cmd_passthru": { 00:21:01.144 "identify_ctrlr": false 00:21:01.144 } 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_set_max_subsystems", 00:21:01.144 "params": { 00:21:01.144 "max_subsystems": 1024 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_set_crdt", 00:21:01.144 "params": { 00:21:01.144 "crdt1": 0, 00:21:01.144 "crdt2": 0, 00:21:01.144 "crdt3": 0 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_create_transport", 00:21:01.144 "params": { 00:21:01.144 "trtype": "TCP", 00:21:01.144 "max_queue_depth": 128, 00:21:01.144 "max_io_qpairs_per_ctrlr": 127, 00:21:01.144 "in_capsule_data_size": 4096, 00:21:01.144 "max_io_size": 131072, 00:21:01.144 "io_unit_size": 131072, 00:21:01.144 "max_aq_depth": 128, 00:21:01.144 "num_shared_buffers": 511, 00:21:01.144 "buf_cache_size": 4294967295, 00:21:01.144 "dif_insert_or_strip": false, 00:21:01.144 "zcopy": false, 00:21:01.144 "c2h_success": false, 00:21:01.144 "sock_priority": 0, 00:21:01.144 "abort_timeout_sec": 1, 00:21:01.144 "ack_timeout": 0, 00:21:01.144 "data_wr_pool_size": 0 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_create_subsystem", 00:21:01.144 "params": { 00:21:01.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.144 "allow_any_host": false, 00:21:01.144 "serial_number": "SPDK00000000000001", 00:21:01.144 "model_number": "SPDK bdev Controller", 00:21:01.144 "max_namespaces": 10, 00:21:01.144 "min_cntlid": 1, 00:21:01.144 "max_cntlid": 65519, 00:21:01.144 "ana_reporting": false 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_subsystem_add_host", 00:21:01.144 "params": { 00:21:01.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.144 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.144 "psk": "/tmp/tmp.pDBy2MD2Qr" 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_subsystem_add_ns", 00:21:01.144 "params": { 00:21:01.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.144 "namespace": { 00:21:01.144 "nsid": 1, 00:21:01.144 "bdev_name": "malloc0", 00:21:01.144 "nguid": "0724022E3D8B4889972B828D962C1524", 00:21:01.144 "uuid": "0724022e-3d8b-4889-972b-828d962c1524", 00:21:01.144 "no_auto_visible": false 00:21:01.144 } 00:21:01.144 } 00:21:01.144 }, 00:21:01.144 { 00:21:01.144 "method": "nvmf_subsystem_add_listener", 00:21:01.144 "params": { 00:21:01.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.144 "listen_address": { 00:21:01.144 "trtype": "TCP", 00:21:01.144 "adrfam": "IPv4", 00:21:01.144 "traddr": "10.0.0.2", 00:21:01.144 "trsvcid": "4420" 00:21:01.144 }, 00:21:01.144 "secure_channel": true 00:21:01.144 } 00:21:01.144 } 00:21:01.144 ] 
00:21:01.144 } 00:21:01.144 ] 00:21:01.144 }' 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=692805 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 692805 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 692805 ']' 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.144 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:01.145 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.145 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:01.145 12:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.145 [2024-06-10 12:25:06.594931] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:01.145 [2024-06-10 12:25:06.594981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.145 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.145 [2024-06-10 12:25:06.681573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.145 [2024-06-10 12:25:06.734208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.145 [2024-06-10 12:25:06.734241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.145 [2024-06-10 12:25:06.734247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.145 [2024-06-10 12:25:06.734252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.145 [2024-06-10 12:25:06.734256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
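A note on the -c /dev/fd/62 argument in the nvmf_tgt invocation above: the test never writes the saved configuration to disk. It relies on bash process substitution, so the JSON captured by save_config (held in the tgtconf shell variable set earlier) reaches the new target through an anonymous file descriptor. A minimal sketch of the pattern, assuming the same netns and variable name used in this run:

    # relaunch the target inside the test netns; the saved JSON config
    # arrives via a process-substitution fd (shows up as /dev/fd/62)
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(echo "$tgtconf")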
00:21:01.145 [2024-06-10 12:25:06.734306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.406 [2024-06-10 12:25:06.916941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.406 [2024-06-10 12:25:06.932905] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:01.406 [2024-06-10 12:25:06.948952] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.406 [2024-06-10 12:25:06.966352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=693155 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 693155 /var/tmp/bdevperf.sock 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 693155 ']' 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
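The bdevperf process being awaited here is launched (next line) with -z, which holds it idle until a perform_tests RPC arrives on the socket named by -r; its bdev stack, including the TLS-enabled NVMe-oF controller, comes from the bdevperfconf save_config dump captured above rather than from live RPC calls. A sketch of the equivalent launch, assuming $bdevperfconf holds that JSON:

    # start bdevperf idle (-z); it serves RPCs on /var/tmp/bdevperf.sock and
    # builds its bdevs, PSK path included, from the saved config on /dev/fd/63
    ./spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")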
00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.979 12:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:01.979 "subsystems": [ 00:21:01.979 { 00:21:01.979 "subsystem": "keyring", 00:21:01.979 "config": [] 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "subsystem": "iobuf", 00:21:01.979 "config": [ 00:21:01.979 { 00:21:01.979 "method": "iobuf_set_options", 00:21:01.979 "params": { 00:21:01.979 "small_pool_count": 8192, 00:21:01.979 "large_pool_count": 1024, 00:21:01.979 "small_bufsize": 8192, 00:21:01.979 "large_bufsize": 135168 00:21:01.979 } 00:21:01.979 } 00:21:01.979 ] 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "subsystem": "sock", 00:21:01.979 "config": [ 00:21:01.979 { 00:21:01.979 "method": "sock_set_default_impl", 00:21:01.979 "params": { 00:21:01.979 "impl_name": "posix" 00:21:01.979 } 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "method": "sock_impl_set_options", 00:21:01.979 "params": { 00:21:01.979 "impl_name": "ssl", 00:21:01.979 "recv_buf_size": 4096, 00:21:01.979 "send_buf_size": 4096, 00:21:01.979 "enable_recv_pipe": true, 00:21:01.979 "enable_quickack": false, 00:21:01.979 "enable_placement_id": 0, 00:21:01.979 "enable_zerocopy_send_server": true, 00:21:01.979 "enable_zerocopy_send_client": false, 00:21:01.979 "zerocopy_threshold": 0, 00:21:01.979 "tls_version": 0, 00:21:01.979 "enable_ktls": false 00:21:01.979 } 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "method": "sock_impl_set_options", 00:21:01.979 "params": { 00:21:01.979 "impl_name": "posix", 00:21:01.979 "recv_buf_size": 2097152, 00:21:01.979 "send_buf_size": 2097152, 00:21:01.979 "enable_recv_pipe": true, 00:21:01.979 "enable_quickack": false, 00:21:01.979 "enable_placement_id": 0, 00:21:01.979 "enable_zerocopy_send_server": true, 00:21:01.979 "enable_zerocopy_send_client": false, 00:21:01.979 "zerocopy_threshold": 0, 00:21:01.979 "tls_version": 0, 00:21:01.979 "enable_ktls": false 00:21:01.979 } 00:21:01.979 } 00:21:01.979 ] 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "subsystem": "vmd", 00:21:01.979 "config": [] 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "subsystem": "accel", 00:21:01.979 "config": [ 00:21:01.979 { 00:21:01.979 "method": "accel_set_options", 00:21:01.979 "params": { 00:21:01.979 "small_cache_size": 128, 00:21:01.979 "large_cache_size": 16, 00:21:01.979 "task_count": 2048, 00:21:01.979 "sequence_count": 2048, 00:21:01.979 "buf_count": 2048 00:21:01.979 } 00:21:01.979 } 00:21:01.979 ] 00:21:01.979 }, 00:21:01.979 { 00:21:01.979 "subsystem": "bdev", 00:21:01.979 "config": [ 00:21:01.979 { 00:21:01.979 "method": "bdev_set_options", 00:21:01.979 "params": { 00:21:01.979 "bdev_io_pool_size": 65535, 00:21:01.979 "bdev_io_cache_size": 256, 00:21:01.979 "bdev_auto_examine": true, 00:21:01.979 "iobuf_small_cache_size": 128, 00:21:01.979 "iobuf_large_cache_size": 16 00:21:01.979 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": "bdev_raid_set_options", 00:21:01.980 "params": { 00:21:01.980 "process_window_size_kb": 1024 00:21:01.980 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": "bdev_iscsi_set_options", 00:21:01.980 "params": { 00:21:01.980 "timeout_sec": 30 00:21:01.980 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": 
"bdev_nvme_set_options", 00:21:01.980 "params": { 00:21:01.980 "action_on_timeout": "none", 00:21:01.980 "timeout_us": 0, 00:21:01.980 "timeout_admin_us": 0, 00:21:01.980 "keep_alive_timeout_ms": 10000, 00:21:01.980 "arbitration_burst": 0, 00:21:01.980 "low_priority_weight": 0, 00:21:01.980 "medium_priority_weight": 0, 00:21:01.980 "high_priority_weight": 0, 00:21:01.980 "nvme_adminq_poll_period_us": 10000, 00:21:01.980 "nvme_ioq_poll_period_us": 0, 00:21:01.980 "io_queue_requests": 512, 00:21:01.980 "delay_cmd_submit": true, 00:21:01.980 "transport_retry_count": 4, 00:21:01.980 "bdev_retry_count": 3, 00:21:01.980 "transport_ack_timeout": 0, 00:21:01.980 "ctrlr_loss_timeout_sec": 0, 00:21:01.980 "reconnect_delay_sec": 0, 00:21:01.980 "fast_io_fail_timeout_sec": 0, 00:21:01.980 "disable_auto_failback": false, 00:21:01.980 "generate_uuids": false, 00:21:01.980 "transport_tos": 0, 00:21:01.980 "nvme_error_stat": false, 00:21:01.980 "rdma_srq_size": 0, 00:21:01.980 "io_path_stat": false, 00:21:01.980 "allow_accel_sequence": false, 00:21:01.980 "rdma_max_cq_size": 0, 00:21:01.980 "rdma_cm_event_timeout_ms": 0, 00:21:01.980 "dhchap_digests": [ 00:21:01.980 "sha256", 00:21:01.980 "sha384", 00:21:01.980 "sha512" 00:21:01.980 ], 00:21:01.980 "dhchap_dhgroups": [ 00:21:01.980 "null", 00:21:01.980 "ffdhe2048", 00:21:01.980 "ffdhe3072", 00:21:01.980 "ffdhe4096", 00:21:01.980 "ffdhe6144", 00:21:01.980 "ffdhe8192" 00:21:01.980 ] 00:21:01.980 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": "bdev_nvme_attach_controller", 00:21:01.980 "params": { 00:21:01.980 "name": "TLSTEST", 00:21:01.980 "trtype": "TCP", 00:21:01.980 "adrfam": "IPv4", 00:21:01.980 "traddr": "10.0.0.2", 00:21:01.980 "trsvcid": "4420", 00:21:01.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.980 "prchk_reftag": false, 00:21:01.980 "prchk_guard": false, 00:21:01.980 "ctrlr_loss_timeout_sec": 0, 00:21:01.980 "reconnect_delay_sec": 0, 00:21:01.980 "fast_io_fail_timeout_sec": 0, 00:21:01.980 "psk": "/tmp/tmp.pDBy2MD2Qr", 00:21:01.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.980 "hdgst": false, 00:21:01.980 "ddgst": false 00:21:01.980 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": "bdev_nvme_set_hotplug", 00:21:01.980 "params": { 00:21:01.980 "period_us": 100000, 00:21:01.980 "enable": false 00:21:01.980 } 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "method": "bdev_wait_for_examine" 00:21:01.980 } 00:21:01.980 ] 00:21:01.980 }, 00:21:01.980 { 00:21:01.980 "subsystem": "nbd", 00:21:01.980 "config": [] 00:21:01.980 } 00:21:01.980 ] 00:21:01.980 }' 00:21:01.980 [2024-06-10 12:25:07.447753] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:21:01.980 [2024-06-10 12:25:07.447821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693155 ] 00:21:01.980 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.980 [2024-06-10 12:25:07.503811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.980 [2024-06-10 12:25:07.555988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.242 [2024-06-10 12:25:07.679737] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.242 [2024-06-10 12:25:07.679801] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.815 12:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:02.815 12:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:02.815 12:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.815 Running I/O for 10 seconds... 00:21:12.825 00:21:12.825 Latency(us) 00:21:12.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.825 Verification LBA range: start 0x0 length 0x2000 00:21:12.825 TLSTESTn1 : 10.01 6002.60 23.45 0.00 0.00 21293.65 5461.33 69468.16 00:21:12.825 =================================================================================================================== 00:21:12.825 Total : 6002.60 23.45 0.00 0.00 21293.65 5461.33 69468.16 00:21:12.825 0 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 693155 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 693155 ']' 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 693155 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:12.825 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 693155 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 693155' 00:21:13.086 killing process with pid 693155 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 693155 00:21:13.086 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.086 00:21:13.086 Latency(us) 00:21:13.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.086 =================================================================================================================== 00:21:13.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.086 [2024-06-10 12:25:18.443993] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 693155 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 692805 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 692805 ']' 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 692805 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 692805 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 692805' 00:21:13.086 killing process with pid 692805 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 692805 00:21:13.086 [2024-06-10 12:25:18.612622] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:13.086 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 692805 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=695200 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 695200 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 695200 ']' 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:13.347 12:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.347 [2024-06-10 12:25:18.792126] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:13.347 [2024-06-10 12:25:18.792177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.347 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.347 [2024-06-10 12:25:18.864910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.347 [2024-06-10 12:25:18.928472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:13.347 [2024-06-10 12:25:18.928509] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.347 [2024-06-10 12:25:18.928517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.347 [2024-06-10 12:25:18.928523] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.347 [2024-06-10 12:25:18.928528] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.347 [2024-06-10 12:25:18.928550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.pDBy2MD2Qr 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pDBy2MD2Qr 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:14.297 [2024-06-10 12:25:19.747176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.297 12:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:14.558 12:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:14.558 [2024-06-10 12:25:20.067967] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.558 [2024-06-10 12:25:20.068179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.558 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.818 malloc0 00:21:14.818 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pDBy2MD2Qr 00:21:15.080 [2024-06-10 12:25:20.579805] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=695646 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 695646 /var/tmp/bdevperf.sock 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 695646 ']' 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:15.080 12:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.080 [2024-06-10 12:25:20.658446] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:15.080 [2024-06-10 12:25:20.658497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695646 ] 00:21:15.341 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.341 [2024-06-10 12:25:20.740531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.341 [2024-06-10 12:25:20.794291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.912 12:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:15.912 12:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:15.912 12:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDBy2MD2Qr 00:21:16.172 12:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:16.172 [2024-06-10 12:25:21.699699] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.172 nvme0n1 00:21:16.432 12:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.432 Running I/O for 1 seconds... 
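For contrast with the earlier path-based attach, the sequence just exercised uses the keyring: the PSK file is first registered as a named key over bdevperf's RPC socket, and the controller then references it by name. Reassembled from the rpc.py commands above (same socket, key file, and NQNs as this run):

    # keyring-based TLS attach: register the PSK, then point the
    # controller at it by name instead of embedding the file path
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDBy2MD2Qr
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1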
00:21:17.372 00:21:17.372 Latency(us) 00:21:17.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.372 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.372 Verification LBA range: start 0x0 length 0x2000 00:21:17.372 nvme0n1 : 1.02 6154.66 24.04 0.00 0.00 20626.15 5816.32 33423.36 00:21:17.372 =================================================================================================================== 00:21:17.372 Total : 6154.66 24.04 0.00 0.00 20626.15 5816.32 33423.36 00:21:17.372 0 00:21:17.372 12:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 695646 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 695646 ']' 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 695646 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 695646 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 695646' 00:21:17.373 killing process with pid 695646 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 695646 00:21:17.373 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.373 00:21:17.373 Latency(us) 00:21:17.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.373 =================================================================================================================== 00:21:17.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.373 12:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 695646 00:21:17.632 12:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 695200 00:21:17.632 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 695200 ']' 00:21:17.632 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 695200 00:21:17.632 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:17.632 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 695200 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 695200' 00:21:17.633 killing process with pid 695200 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 695200 00:21:17.633 [2024-06-10 12:25:23.108786] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:17.633 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 695200 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.893 12:25:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=696215 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 696215 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 696215 ']' 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:17.893 12:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.893 [2024-06-10 12:25:23.306887] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:17.893 [2024-06-10 12:25:23.306943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.893 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.893 [2024-06-10 12:25:23.378125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.893 [2024-06-10 12:25:23.443619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.893 [2024-06-10 12:25:23.443657] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.893 [2024-06-10 12:25:23.443665] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.893 [2024-06-10 12:25:23.443671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.893 [2024-06-10 12:25:23.443677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
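This final target (pid 696215) is brought up bare, with no -c config, and is then configured entirely through live rpc_cmd calls before save_config snapshots it. In the resulting tgtcfg JSON further below, the PSK finally appears as a first-class keyring object, and nvmf_subsystem_add_host refers to it by name rather than by file path; excerpted from that dump:

    "subsystem": "keyring",
    "config": [ { "method": "keyring_file_add_key",
                  "params": { "name": "key0", "path": "/tmp/tmp.pDBy2MD2Qr" } } ]
    ...
    { "method": "nvmf_subsystem_add_host",
      "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                  "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }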
00:21:17.893 [2024-06-10 12:25:23.443695] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.464 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:18.464 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:18.464 12:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.464 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:18.464 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.724 12:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.724 12:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:18.724 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.724 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.724 [2024-06-10 12:25:24.109906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.724 malloc0 00:21:18.724 [2024-06-10 12:25:24.136634] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.725 [2024-06-10 12:25:24.136833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=696396 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 696396 /var/tmp/bdevperf.sock 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 696396 ']' 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:18.725 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.725 [2024-06-10 12:25:24.220047] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
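For reading the verification tables in this log: the MiB/s column is simply IOPS times the 4 KiB I/O size, i.e. IOPS x 4096 / 2^20, which reduces to IOPS / 256. Checking the three runs against the printed columns:

    6002.60 / 256 ~= 23.45    (10 s TLSTESTn1 run)
    6154.66 / 256 ~= 24.04    (1 s nvme0n1 run, PSK path)
    4641.48 / 256 ~= 18.13    (1 s nvme0n1 run reported below, keyring)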
00:21:18.725 [2024-06-10 12:25:24.220095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696396 ] 00:21:18.725 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.725 [2024-06-10 12:25:24.299360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.985 [2024-06-10 12:25:24.353124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.556 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:19.556 12:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:19.556 12:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDBy2MD2Qr 00:21:19.556 12:25:25 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:19.816 [2024-06-10 12:25:25.250405] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.816 nvme0n1 00:21:19.816 12:25:25 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.077 Running I/O for 1 seconds... 00:21:21.020 00:21:21.020 Latency(us) 00:21:21.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.020 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.020 Verification LBA range: start 0x0 length 0x2000 00:21:21.020 nvme0n1 : 1.02 4641.48 18.13 0.00 0.00 27371.12 5843.63 87381.33 00:21:21.020 =================================================================================================================== 00:21:21.020 Total : 4641.48 18.13 0.00 0.00 27371.12 5843.63 87381.33 00:21:21.020 0 00:21:21.020 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:21.020 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.020 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.020 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.020 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:21.020 "subsystems": [ 00:21:21.020 { 00:21:21.020 "subsystem": "keyring", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "keyring_file_add_key", 00:21:21.020 "params": { 00:21:21.020 "name": "key0", 00:21:21.020 "path": "/tmp/tmp.pDBy2MD2Qr" 00:21:21.020 } 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "iobuf", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "iobuf_set_options", 00:21:21.020 "params": { 00:21:21.020 "small_pool_count": 8192, 00:21:21.020 "large_pool_count": 1024, 00:21:21.020 "small_bufsize": 8192, 00:21:21.020 "large_bufsize": 135168 00:21:21.020 } 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "sock", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "sock_set_default_impl", 00:21:21.020 "params": { 00:21:21.020 "impl_name": "posix" 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 
{ 00:21:21.020 "method": "sock_impl_set_options", 00:21:21.020 "params": { 00:21:21.020 "impl_name": "ssl", 00:21:21.020 "recv_buf_size": 4096, 00:21:21.020 "send_buf_size": 4096, 00:21:21.020 "enable_recv_pipe": true, 00:21:21.020 "enable_quickack": false, 00:21:21.020 "enable_placement_id": 0, 00:21:21.020 "enable_zerocopy_send_server": true, 00:21:21.020 "enable_zerocopy_send_client": false, 00:21:21.020 "zerocopy_threshold": 0, 00:21:21.020 "tls_version": 0, 00:21:21.020 "enable_ktls": false 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "sock_impl_set_options", 00:21:21.020 "params": { 00:21:21.020 "impl_name": "posix", 00:21:21.020 "recv_buf_size": 2097152, 00:21:21.020 "send_buf_size": 2097152, 00:21:21.020 "enable_recv_pipe": true, 00:21:21.020 "enable_quickack": false, 00:21:21.020 "enable_placement_id": 0, 00:21:21.020 "enable_zerocopy_send_server": true, 00:21:21.020 "enable_zerocopy_send_client": false, 00:21:21.020 "zerocopy_threshold": 0, 00:21:21.020 "tls_version": 0, 00:21:21.020 "enable_ktls": false 00:21:21.020 } 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "vmd", 00:21:21.020 "config": [] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "accel", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "accel_set_options", 00:21:21.020 "params": { 00:21:21.020 "small_cache_size": 128, 00:21:21.020 "large_cache_size": 16, 00:21:21.020 "task_count": 2048, 00:21:21.020 "sequence_count": 2048, 00:21:21.020 "buf_count": 2048 00:21:21.020 } 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "bdev", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "bdev_set_options", 00:21:21.020 "params": { 00:21:21.020 "bdev_io_pool_size": 65535, 00:21:21.020 "bdev_io_cache_size": 256, 00:21:21.020 "bdev_auto_examine": true, 00:21:21.020 "iobuf_small_cache_size": 128, 00:21:21.020 "iobuf_large_cache_size": 16 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_raid_set_options", 00:21:21.020 "params": { 00:21:21.020 "process_window_size_kb": 1024 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_iscsi_set_options", 00:21:21.020 "params": { 00:21:21.020 "timeout_sec": 30 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_nvme_set_options", 00:21:21.020 "params": { 00:21:21.020 "action_on_timeout": "none", 00:21:21.020 "timeout_us": 0, 00:21:21.020 "timeout_admin_us": 0, 00:21:21.020 "keep_alive_timeout_ms": 10000, 00:21:21.020 "arbitration_burst": 0, 00:21:21.020 "low_priority_weight": 0, 00:21:21.020 "medium_priority_weight": 0, 00:21:21.020 "high_priority_weight": 0, 00:21:21.020 "nvme_adminq_poll_period_us": 10000, 00:21:21.020 "nvme_ioq_poll_period_us": 0, 00:21:21.020 "io_queue_requests": 0, 00:21:21.020 "delay_cmd_submit": true, 00:21:21.020 "transport_retry_count": 4, 00:21:21.020 "bdev_retry_count": 3, 00:21:21.020 "transport_ack_timeout": 0, 00:21:21.020 "ctrlr_loss_timeout_sec": 0, 00:21:21.020 "reconnect_delay_sec": 0, 00:21:21.020 "fast_io_fail_timeout_sec": 0, 00:21:21.020 "disable_auto_failback": false, 00:21:21.020 "generate_uuids": false, 00:21:21.020 "transport_tos": 0, 00:21:21.020 "nvme_error_stat": false, 00:21:21.020 "rdma_srq_size": 0, 00:21:21.020 "io_path_stat": false, 00:21:21.020 "allow_accel_sequence": false, 00:21:21.020 "rdma_max_cq_size": 0, 00:21:21.020 "rdma_cm_event_timeout_ms": 0, 00:21:21.020 "dhchap_digests": [ 00:21:21.020 "sha256", 00:21:21.020 "sha384", 
00:21:21.020 "sha512" 00:21:21.020 ], 00:21:21.020 "dhchap_dhgroups": [ 00:21:21.020 "null", 00:21:21.020 "ffdhe2048", 00:21:21.020 "ffdhe3072", 00:21:21.020 "ffdhe4096", 00:21:21.020 "ffdhe6144", 00:21:21.020 "ffdhe8192" 00:21:21.020 ] 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_nvme_set_hotplug", 00:21:21.020 "params": { 00:21:21.020 "period_us": 100000, 00:21:21.020 "enable": false 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_malloc_create", 00:21:21.020 "params": { 00:21:21.020 "name": "malloc0", 00:21:21.020 "num_blocks": 8192, 00:21:21.020 "block_size": 4096, 00:21:21.020 "physical_block_size": 4096, 00:21:21.020 "uuid": "471a406b-24a3-4360-8135-3ddfa2647f88", 00:21:21.020 "optimal_io_boundary": 0 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "bdev_wait_for_examine" 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "nbd", 00:21:21.020 "config": [] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "scheduler", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "framework_set_scheduler", 00:21:21.020 "params": { 00:21:21.020 "name": "static" 00:21:21.020 } 00:21:21.020 } 00:21:21.020 ] 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "subsystem": "nvmf", 00:21:21.020 "config": [ 00:21:21.020 { 00:21:21.020 "method": "nvmf_set_config", 00:21:21.020 "params": { 00:21:21.020 "discovery_filter": "match_any", 00:21:21.020 "admin_cmd_passthru": { 00:21:21.020 "identify_ctrlr": false 00:21:21.020 } 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "nvmf_set_max_subsystems", 00:21:21.020 "params": { 00:21:21.020 "max_subsystems": 1024 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.020 "method": "nvmf_set_crdt", 00:21:21.020 "params": { 00:21:21.020 "crdt1": 0, 00:21:21.020 "crdt2": 0, 00:21:21.020 "crdt3": 0 00:21:21.020 } 00:21:21.020 }, 00:21:21.020 { 00:21:21.021 "method": "nvmf_create_transport", 00:21:21.021 "params": { 00:21:21.021 "trtype": "TCP", 00:21:21.021 "max_queue_depth": 128, 00:21:21.021 "max_io_qpairs_per_ctrlr": 127, 00:21:21.021 "in_capsule_data_size": 4096, 00:21:21.021 "max_io_size": 131072, 00:21:21.021 "io_unit_size": 131072, 00:21:21.021 "max_aq_depth": 128, 00:21:21.021 "num_shared_buffers": 511, 00:21:21.021 "buf_cache_size": 4294967295, 00:21:21.021 "dif_insert_or_strip": false, 00:21:21.021 "zcopy": false, 00:21:21.021 "c2h_success": false, 00:21:21.021 "sock_priority": 0, 00:21:21.021 "abort_timeout_sec": 1, 00:21:21.021 "ack_timeout": 0, 00:21:21.021 "data_wr_pool_size": 0 00:21:21.021 } 00:21:21.021 }, 00:21:21.021 { 00:21:21.021 "method": "nvmf_create_subsystem", 00:21:21.021 "params": { 00:21:21.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.021 "allow_any_host": false, 00:21:21.021 "serial_number": "00000000000000000000", 00:21:21.021 "model_number": "SPDK bdev Controller", 00:21:21.021 "max_namespaces": 32, 00:21:21.021 "min_cntlid": 1, 00:21:21.021 "max_cntlid": 65519, 00:21:21.021 "ana_reporting": false 00:21:21.021 } 00:21:21.021 }, 00:21:21.021 { 00:21:21.021 "method": "nvmf_subsystem_add_host", 00:21:21.021 "params": { 00:21:21.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.021 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.021 "psk": "key0" 00:21:21.021 } 00:21:21.021 }, 00:21:21.021 { 00:21:21.021 "method": "nvmf_subsystem_add_ns", 00:21:21.021 "params": { 00:21:21.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.021 "namespace": { 00:21:21.021 "nsid": 1, 00:21:21.021 "bdev_name": 
"malloc0", 00:21:21.021 "nguid": "471A406B24A3436081353DDFA2647F88", 00:21:21.021 "uuid": "471a406b-24a3-4360-8135-3ddfa2647f88", 00:21:21.021 "no_auto_visible": false 00:21:21.021 } 00:21:21.021 } 00:21:21.021 }, 00:21:21.021 { 00:21:21.021 "method": "nvmf_subsystem_add_listener", 00:21:21.021 "params": { 00:21:21.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.021 "listen_address": { 00:21:21.021 "trtype": "TCP", 00:21:21.021 "adrfam": "IPv4", 00:21:21.021 "traddr": "10.0.0.2", 00:21:21.021 "trsvcid": "4420" 00:21:21.021 }, 00:21:21.021 "secure_channel": true 00:21:21.021 } 00:21:21.021 } 00:21:21.021 ] 00:21:21.021 } 00:21:21.021 ] 00:21:21.021 }' 00:21:21.021 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:21.282 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:21.282 "subsystems": [ 00:21:21.282 { 00:21:21.282 "subsystem": "keyring", 00:21:21.282 "config": [ 00:21:21.282 { 00:21:21.282 "method": "keyring_file_add_key", 00:21:21.282 "params": { 00:21:21.282 "name": "key0", 00:21:21.282 "path": "/tmp/tmp.pDBy2MD2Qr" 00:21:21.282 } 00:21:21.282 } 00:21:21.282 ] 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "subsystem": "iobuf", 00:21:21.282 "config": [ 00:21:21.282 { 00:21:21.282 "method": "iobuf_set_options", 00:21:21.282 "params": { 00:21:21.282 "small_pool_count": 8192, 00:21:21.282 "large_pool_count": 1024, 00:21:21.282 "small_bufsize": 8192, 00:21:21.282 "large_bufsize": 135168 00:21:21.282 } 00:21:21.282 } 00:21:21.282 ] 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "subsystem": "sock", 00:21:21.282 "config": [ 00:21:21.282 { 00:21:21.282 "method": "sock_set_default_impl", 00:21:21.282 "params": { 00:21:21.282 "impl_name": "posix" 00:21:21.282 } 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "method": "sock_impl_set_options", 00:21:21.282 "params": { 00:21:21.282 "impl_name": "ssl", 00:21:21.282 "recv_buf_size": 4096, 00:21:21.282 "send_buf_size": 4096, 00:21:21.282 "enable_recv_pipe": true, 00:21:21.282 "enable_quickack": false, 00:21:21.282 "enable_placement_id": 0, 00:21:21.282 "enable_zerocopy_send_server": true, 00:21:21.282 "enable_zerocopy_send_client": false, 00:21:21.282 "zerocopy_threshold": 0, 00:21:21.282 "tls_version": 0, 00:21:21.282 "enable_ktls": false 00:21:21.282 } 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "method": "sock_impl_set_options", 00:21:21.282 "params": { 00:21:21.282 "impl_name": "posix", 00:21:21.282 "recv_buf_size": 2097152, 00:21:21.282 "send_buf_size": 2097152, 00:21:21.282 "enable_recv_pipe": true, 00:21:21.282 "enable_quickack": false, 00:21:21.282 "enable_placement_id": 0, 00:21:21.282 "enable_zerocopy_send_server": true, 00:21:21.282 "enable_zerocopy_send_client": false, 00:21:21.282 "zerocopy_threshold": 0, 00:21:21.282 "tls_version": 0, 00:21:21.282 "enable_ktls": false 00:21:21.282 } 00:21:21.282 } 00:21:21.282 ] 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "subsystem": "vmd", 00:21:21.282 "config": [] 00:21:21.282 }, 00:21:21.282 { 00:21:21.282 "subsystem": "accel", 00:21:21.282 "config": [ 00:21:21.282 { 00:21:21.282 "method": "accel_set_options", 00:21:21.282 "params": { 00:21:21.282 "small_cache_size": 128, 00:21:21.282 "large_cache_size": 16, 00:21:21.282 "task_count": 2048, 00:21:21.282 "sequence_count": 2048, 00:21:21.282 "buf_count": 2048 00:21:21.282 } 00:21:21.282 } 00:21:21.282 ] 00:21:21.282 }, 00:21:21.282 { 00:21:21.283 "subsystem": "bdev", 00:21:21.283 "config": [ 00:21:21.283 { 00:21:21.283 
"method": "bdev_set_options", 00:21:21.283 "params": { 00:21:21.283 "bdev_io_pool_size": 65535, 00:21:21.283 "bdev_io_cache_size": 256, 00:21:21.283 "bdev_auto_examine": true, 00:21:21.283 "iobuf_small_cache_size": 128, 00:21:21.283 "iobuf_large_cache_size": 16 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_raid_set_options", 00:21:21.283 "params": { 00:21:21.283 "process_window_size_kb": 1024 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_iscsi_set_options", 00:21:21.283 "params": { 00:21:21.283 "timeout_sec": 30 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_nvme_set_options", 00:21:21.283 "params": { 00:21:21.283 "action_on_timeout": "none", 00:21:21.283 "timeout_us": 0, 00:21:21.283 "timeout_admin_us": 0, 00:21:21.283 "keep_alive_timeout_ms": 10000, 00:21:21.283 "arbitration_burst": 0, 00:21:21.283 "low_priority_weight": 0, 00:21:21.283 "medium_priority_weight": 0, 00:21:21.283 "high_priority_weight": 0, 00:21:21.283 "nvme_adminq_poll_period_us": 10000, 00:21:21.283 "nvme_ioq_poll_period_us": 0, 00:21:21.283 "io_queue_requests": 512, 00:21:21.283 "delay_cmd_submit": true, 00:21:21.283 "transport_retry_count": 4, 00:21:21.283 "bdev_retry_count": 3, 00:21:21.283 "transport_ack_timeout": 0, 00:21:21.283 "ctrlr_loss_timeout_sec": 0, 00:21:21.283 "reconnect_delay_sec": 0, 00:21:21.283 "fast_io_fail_timeout_sec": 0, 00:21:21.283 "disable_auto_failback": false, 00:21:21.283 "generate_uuids": false, 00:21:21.283 "transport_tos": 0, 00:21:21.283 "nvme_error_stat": false, 00:21:21.283 "rdma_srq_size": 0, 00:21:21.283 "io_path_stat": false, 00:21:21.283 "allow_accel_sequence": false, 00:21:21.283 "rdma_max_cq_size": 0, 00:21:21.283 "rdma_cm_event_timeout_ms": 0, 00:21:21.283 "dhchap_digests": [ 00:21:21.283 "sha256", 00:21:21.283 "sha384", 00:21:21.283 "sha512" 00:21:21.283 ], 00:21:21.283 "dhchap_dhgroups": [ 00:21:21.283 "null", 00:21:21.283 "ffdhe2048", 00:21:21.283 "ffdhe3072", 00:21:21.283 "ffdhe4096", 00:21:21.283 "ffdhe6144", 00:21:21.283 "ffdhe8192" 00:21:21.283 ] 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_nvme_attach_controller", 00:21:21.283 "params": { 00:21:21.283 "name": "nvme0", 00:21:21.283 "trtype": "TCP", 00:21:21.283 "adrfam": "IPv4", 00:21:21.283 "traddr": "10.0.0.2", 00:21:21.283 "trsvcid": "4420", 00:21:21.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.283 "prchk_reftag": false, 00:21:21.283 "prchk_guard": false, 00:21:21.283 "ctrlr_loss_timeout_sec": 0, 00:21:21.283 "reconnect_delay_sec": 0, 00:21:21.283 "fast_io_fail_timeout_sec": 0, 00:21:21.283 "psk": "key0", 00:21:21.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.283 "hdgst": false, 00:21:21.283 "ddgst": false 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_nvme_set_hotplug", 00:21:21.283 "params": { 00:21:21.283 "period_us": 100000, 00:21:21.283 "enable": false 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_enable_histogram", 00:21:21.283 "params": { 00:21:21.283 "name": "nvme0n1", 00:21:21.283 "enable": true 00:21:21.283 } 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "method": "bdev_wait_for_examine" 00:21:21.283 } 00:21:21.283 ] 00:21:21.283 }, 00:21:21.283 { 00:21:21.283 "subsystem": "nbd", 00:21:21.283 "config": [] 00:21:21.283 } 00:21:21.283 ] 00:21:21.283 }' 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 696396 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 696396 ']' 
00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 696396 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 696396 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 696396' 00:21:21.283 killing process with pid 696396 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 696396 00:21:21.283 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.283 00:21:21.283 Latency(us) 00:21:21.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.283 =================================================================================================================== 00:21:21.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.283 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 696396 00:21:21.543 12:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 696215 00:21:21.544 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 696215 ']' 00:21:21.544 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 696215 00:21:21.544 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:21.544 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:21.544 12:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 696215 00:21:21.544 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:21.544 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:21.544 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 696215' 00:21:21.544 killing process with pid 696215 00:21:21.544 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 696215 00:21:21.544 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 696215 00:21:21.804 12:25:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:21.804 12:25:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.804 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:21.804 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.804 12:25:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:21.804 "subsystems": [ 00:21:21.804 { 00:21:21.804 "subsystem": "keyring", 00:21:21.804 "config": [ 00:21:21.804 { 00:21:21.804 "method": "keyring_file_add_key", 00:21:21.804 "params": { 00:21:21.804 "name": "key0", 00:21:21.804 "path": "/tmp/tmp.pDBy2MD2Qr" 00:21:21.804 } 00:21:21.804 } 00:21:21.804 ] 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "subsystem": "iobuf", 00:21:21.804 "config": [ 00:21:21.804 { 00:21:21.804 "method": "iobuf_set_options", 00:21:21.804 "params": { 00:21:21.804 "small_pool_count": 8192, 00:21:21.804 "large_pool_count": 1024, 00:21:21.804 "small_bufsize": 8192, 00:21:21.804 "large_bufsize": 135168 00:21:21.804 } 00:21:21.804 } 00:21:21.804 ] 
00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "subsystem": "sock", 00:21:21.804 "config": [ 00:21:21.804 { 00:21:21.804 "method": "sock_set_default_impl", 00:21:21.804 "params": { 00:21:21.804 "impl_name": "posix" 00:21:21.804 } 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "method": "sock_impl_set_options", 00:21:21.804 "params": { 00:21:21.804 "impl_name": "ssl", 00:21:21.804 "recv_buf_size": 4096, 00:21:21.804 "send_buf_size": 4096, 00:21:21.804 "enable_recv_pipe": true, 00:21:21.804 "enable_quickack": false, 00:21:21.804 "enable_placement_id": 0, 00:21:21.804 "enable_zerocopy_send_server": true, 00:21:21.804 "enable_zerocopy_send_client": false, 00:21:21.804 "zerocopy_threshold": 0, 00:21:21.804 "tls_version": 0, 00:21:21.804 "enable_ktls": false 00:21:21.804 } 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "method": "sock_impl_set_options", 00:21:21.804 "params": { 00:21:21.804 "impl_name": "posix", 00:21:21.804 "recv_buf_size": 2097152, 00:21:21.804 "send_buf_size": 2097152, 00:21:21.804 "enable_recv_pipe": true, 00:21:21.804 "enable_quickack": false, 00:21:21.804 "enable_placement_id": 0, 00:21:21.804 "enable_zerocopy_send_server": true, 00:21:21.804 "enable_zerocopy_send_client": false, 00:21:21.804 "zerocopy_threshold": 0, 00:21:21.804 "tls_version": 0, 00:21:21.804 "enable_ktls": false 00:21:21.804 } 00:21:21.804 } 00:21:21.804 ] 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "subsystem": "vmd", 00:21:21.804 "config": [] 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "subsystem": "accel", 00:21:21.804 "config": [ 00:21:21.804 { 00:21:21.804 "method": "accel_set_options", 00:21:21.804 "params": { 00:21:21.804 "small_cache_size": 128, 00:21:21.804 "large_cache_size": 16, 00:21:21.804 "task_count": 2048, 00:21:21.804 "sequence_count": 2048, 00:21:21.804 "buf_count": 2048 00:21:21.804 } 00:21:21.804 } 00:21:21.804 ] 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "subsystem": "bdev", 00:21:21.804 "config": [ 00:21:21.804 { 00:21:21.804 "method": "bdev_set_options", 00:21:21.804 "params": { 00:21:21.804 "bdev_io_pool_size": 65535, 00:21:21.804 "bdev_io_cache_size": 256, 00:21:21.804 "bdev_auto_examine": true, 00:21:21.804 "iobuf_small_cache_size": 128, 00:21:21.804 "iobuf_large_cache_size": 16 00:21:21.804 } 00:21:21.804 }, 00:21:21.804 { 00:21:21.804 "method": "bdev_raid_set_options", 00:21:21.804 "params": { 00:21:21.804 "process_window_size_kb": 1024 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "bdev_iscsi_set_options", 00:21:21.805 "params": { 00:21:21.805 "timeout_sec": 30 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "bdev_nvme_set_options", 00:21:21.805 "params": { 00:21:21.805 "action_on_timeout": "none", 00:21:21.805 "timeout_us": 0, 00:21:21.805 "timeout_admin_us": 0, 00:21:21.805 "keep_alive_timeout_ms": 10000, 00:21:21.805 "arbitration_burst": 0, 00:21:21.805 "low_priority_weight": 0, 00:21:21.805 "medium_priority_weight": 0, 00:21:21.805 "high_priority_weight": 0, 00:21:21.805 "nvme_adminq_poll_period_us": 10000, 00:21:21.805 "nvme_ioq_poll_period_us": 0, 00:21:21.805 "io_queue_requests": 0, 00:21:21.805 "delay_cmd_submit": true, 00:21:21.805 "transport_retry_count": 4, 00:21:21.805 "bdev_retry_count": 3, 00:21:21.805 "transport_ack_timeout": 0, 00:21:21.805 "ctrlr_loss_timeout_sec": 0, 00:21:21.805 "reconnect_delay_sec": 0, 00:21:21.805 "fast_io_fail_timeout_sec": 0, 00:21:21.805 "disable_auto_failback": false, 00:21:21.805 "generate_uuids": false, 00:21:21.805 "transport_tos": 0, 00:21:21.805 "nvme_error_stat": false, 00:21:21.805 
"rdma_srq_size": 0, 00:21:21.805 "io_path_stat": false, 00:21:21.805 "allow_accel_sequence": false, 00:21:21.805 "rdma_max_cq_size": 0, 00:21:21.805 "rdma_cm_event_timeout_ms": 0, 00:21:21.805 "dhchap_digests": [ 00:21:21.805 "sha256", 00:21:21.805 "sha384", 00:21:21.805 "sha512" 00:21:21.805 ], 00:21:21.805 "dhchap_dhgroups": [ 00:21:21.805 "null", 00:21:21.805 "ffdhe2048", 00:21:21.805 "ffdhe3072", 00:21:21.805 "ffdhe4096", 00:21:21.805 "ffdhe6144", 00:21:21.805 "ffdhe8192" 00:21:21.805 ] 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "bdev_nvme_set_hotplug", 00:21:21.805 "params": { 00:21:21.805 "period_us": 100000, 00:21:21.805 "enable": false 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "bdev_malloc_create", 00:21:21.805 "params": { 00:21:21.805 "name": "malloc0", 00:21:21.805 "num_blocks": 8192, 00:21:21.805 "block_size": 4096, 00:21:21.805 "physical_block_size": 4096, 00:21:21.805 "uuid": "471a406b-24a3-4360-8135-3ddfa2647f88", 00:21:21.805 "optimal_io_boundary": 0 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "bdev_wait_for_examine" 00:21:21.805 } 00:21:21.805 ] 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "subsystem": "nbd", 00:21:21.805 "config": [] 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "subsystem": "scheduler", 00:21:21.805 "config": [ 00:21:21.805 { 00:21:21.805 "method": "framework_set_scheduler", 00:21:21.805 "params": { 00:21:21.805 "name": "static" 00:21:21.805 } 00:21:21.805 } 00:21:21.805 ] 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "subsystem": "nvmf", 00:21:21.805 "config": [ 00:21:21.805 { 00:21:21.805 "method": "nvmf_set_config", 00:21:21.805 "params": { 00:21:21.805 "discovery_filter": "match_any", 00:21:21.805 "admin_cmd_passthru": { 00:21:21.805 "identify_ctrlr": false 00:21:21.805 } 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_set_max_subsystems", 00:21:21.805 "params": { 00:21:21.805 "max_subsystems": 1024 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_set_crdt", 00:21:21.805 "params": { 00:21:21.805 "crdt1": 0, 00:21:21.805 "crdt2": 0, 00:21:21.805 "crdt3": 0 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_create_transport", 00:21:21.805 "params": { 00:21:21.805 "trtype": "TCP", 00:21:21.805 "max_queue_depth": 128, 00:21:21.805 "max_io_qpairs_per_ctrlr": 127, 00:21:21.805 "in_capsule_data_size": 4096, 00:21:21.805 "max_io_size": 131072, 00:21:21.805 "io_unit_size": 131072, 00:21:21.805 "max_aq_depth": 128, 00:21:21.805 "num_shared_buffers": 511, 00:21:21.805 "buf_cache_size": 4294967295, 00:21:21.805 "dif_insert_or_strip": false, 00:21:21.805 "zcopy": false, 00:21:21.805 "c2h_success": false, 00:21:21.805 "sock_priority": 0, 00:21:21.805 "abort_timeout_sec": 1, 00:21:21.805 "ack_timeout": 0, 00:21:21.805 "data_wr_pool_size": 0 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_create_subsystem", 00:21:21.805 "params": { 00:21:21.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.805 "allow_any_host": false, 00:21:21.805 "serial_number": "00000000000000000000", 00:21:21.805 "model_number": "SPDK bdev Controller", 00:21:21.805 "max_namespaces": 32, 00:21:21.805 "min_cntlid": 1, 00:21:21.805 "max_cntlid": 65519, 00:21:21.805 "ana_reporting": false 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_subsystem_add_host", 00:21:21.805 "params": { 00:21:21.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.805 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.805 "psk": 
"key0" 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_subsystem_add_ns", 00:21:21.805 "params": { 00:21:21.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.805 "namespace": { 00:21:21.805 "nsid": 1, 00:21:21.805 "bdev_name": "malloc0", 00:21:21.805 "nguid": "471A406B24A3436081353DDFA2647F88", 00:21:21.805 "uuid": "471a406b-24a3-4360-8135-3ddfa2647f88", 00:21:21.805 "no_auto_visible": false 00:21:21.805 } 00:21:21.805 } 00:21:21.805 }, 00:21:21.805 { 00:21:21.805 "method": "nvmf_subsystem_add_listener", 00:21:21.805 "params": { 00:21:21.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.805 "listen_address": { 00:21:21.805 "trtype": "TCP", 00:21:21.805 "adrfam": "IPv4", 00:21:21.805 "traddr": "10.0.0.2", 00:21:21.805 "trsvcid": "4420" 00:21:21.805 }, 00:21:21.805 "secure_channel": true 00:21:21.805 } 00:21:21.805 } 00:21:21.805 ] 00:21:21.805 } 00:21:21.805 ] 00:21:21.805 }' 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=696931 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 696931 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 696931 ']' 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:21.805 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.805 [2024-06-10 12:25:27.222387] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:21.805 [2024-06-10 12:25:27.222443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.805 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.805 [2024-06-10 12:25:27.294222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.805 [2024-06-10 12:25:27.359822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.805 [2024-06-10 12:25:27.359857] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.805 [2024-06-10 12:25:27.359865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.805 [2024-06-10 12:25:27.359871] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.805 [2024-06-10 12:25:27.359877] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.805 [2024-06-10 12:25:27.359926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.066 [2024-06-10 12:25:27.556678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.066 [2024-06-10 12:25:27.588679] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.066 [2024-06-10 12:25:27.604355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.641 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:22.641 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:22.641 12:25:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.641 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:22.641 12:25:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.641 12:25:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.641 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=697274 00:21:22.641 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 697274 /var/tmp/bdevperf.sock 00:21:22.641 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 697274 ']' 00:21:22.641 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
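bdevperf is launched below in wait mode (-z) with its own config fed over /dev/fd/63, then driven entirely through its RPC socket until perform_tests kicks off the timed run. A minimal sketch of that flow, with the repository path abbreviated and a hypothetical PSK file name in place of the real temporary key:

# Sketch only: the flags mirror the invocation that follows; paths are shortened.
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt  # hypothetical key path
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests       # start the timed run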
00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.642 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:22.642 "subsystems": [ 00:21:22.642 { 00:21:22.642 "subsystem": "keyring", 00:21:22.642 "config": [ 00:21:22.642 { 00:21:22.642 "method": "keyring_file_add_key", 00:21:22.642 "params": { 00:21:22.642 "name": "key0", 00:21:22.642 "path": "/tmp/tmp.pDBy2MD2Qr" 00:21:22.642 } 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "iobuf", 00:21:22.642 "config": [ 00:21:22.642 { 00:21:22.642 "method": "iobuf_set_options", 00:21:22.642 "params": { 00:21:22.642 "small_pool_count": 8192, 00:21:22.642 "large_pool_count": 1024, 00:21:22.642 "small_bufsize": 8192, 00:21:22.642 "large_bufsize": 135168 00:21:22.642 } 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "sock", 00:21:22.642 "config": [ 00:21:22.642 { 00:21:22.642 "method": "sock_set_default_impl", 00:21:22.642 "params": { 00:21:22.642 "impl_name": "posix" 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "sock_impl_set_options", 00:21:22.642 "params": { 00:21:22.642 "impl_name": "ssl", 00:21:22.642 "recv_buf_size": 4096, 00:21:22.642 "send_buf_size": 4096, 00:21:22.642 "enable_recv_pipe": true, 00:21:22.642 "enable_quickack": false, 00:21:22.642 "enable_placement_id": 0, 00:21:22.642 "enable_zerocopy_send_server": true, 00:21:22.642 "enable_zerocopy_send_client": false, 00:21:22.642 "zerocopy_threshold": 0, 00:21:22.642 "tls_version": 0, 00:21:22.642 "enable_ktls": false 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "sock_impl_set_options", 00:21:22.642 "params": { 00:21:22.642 "impl_name": "posix", 00:21:22.642 "recv_buf_size": 2097152, 00:21:22.642 "send_buf_size": 2097152, 00:21:22.642 "enable_recv_pipe": true, 00:21:22.642 "enable_quickack": false, 00:21:22.642 "enable_placement_id": 0, 00:21:22.642 "enable_zerocopy_send_server": true, 00:21:22.642 "enable_zerocopy_send_client": false, 00:21:22.642 "zerocopy_threshold": 0, 00:21:22.642 "tls_version": 0, 00:21:22.642 "enable_ktls": false 00:21:22.642 } 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "vmd", 00:21:22.642 "config": [] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "accel", 00:21:22.642 "config": [ 00:21:22.642 { 00:21:22.642 "method": "accel_set_options", 00:21:22.642 "params": { 00:21:22.642 "small_cache_size": 128, 00:21:22.642 "large_cache_size": 16, 00:21:22.642 "task_count": 2048, 00:21:22.642 "sequence_count": 2048, 00:21:22.642 "buf_count": 2048 00:21:22.642 } 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "bdev", 00:21:22.642 "config": [ 00:21:22.642 { 00:21:22.642 "method": "bdev_set_options", 00:21:22.642 "params": { 00:21:22.642 "bdev_io_pool_size": 65535, 00:21:22.642 "bdev_io_cache_size": 256, 00:21:22.642 "bdev_auto_examine": true, 00:21:22.642 "iobuf_small_cache_size": 128, 00:21:22.642 "iobuf_large_cache_size": 16 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_raid_set_options", 00:21:22.642 "params": { 00:21:22.642 "process_window_size_kb": 1024 00:21:22.642 } 
00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_iscsi_set_options", 00:21:22.642 "params": { 00:21:22.642 "timeout_sec": 30 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_nvme_set_options", 00:21:22.642 "params": { 00:21:22.642 "action_on_timeout": "none", 00:21:22.642 "timeout_us": 0, 00:21:22.642 "timeout_admin_us": 0, 00:21:22.642 "keep_alive_timeout_ms": 10000, 00:21:22.642 "arbitration_burst": 0, 00:21:22.642 "low_priority_weight": 0, 00:21:22.642 "medium_priority_weight": 0, 00:21:22.642 "high_priority_weight": 0, 00:21:22.642 "nvme_adminq_poll_period_us": 10000, 00:21:22.642 "nvme_ioq_poll_period_us": 0, 00:21:22.642 "io_queue_requests": 512, 00:21:22.642 "delay_cmd_submit": true, 00:21:22.642 "transport_retry_count": 4, 00:21:22.642 "bdev_retry_count": 3, 00:21:22.642 "transport_ack_timeout": 0, 00:21:22.642 "ctrlr_loss_timeout_sec": 0, 00:21:22.642 "reconnect_delay_sec": 0, 00:21:22.642 "fast_io_fail_timeout_sec": 0, 00:21:22.642 "disable_auto_failback": false, 00:21:22.642 "generate_uuids": false, 00:21:22.642 "transport_tos": 0, 00:21:22.642 "nvme_error_stat": false, 00:21:22.642 "rdma_srq_size": 0, 00:21:22.642 "io_path_stat": false, 00:21:22.642 "allow_accel_sequence": false, 00:21:22.642 "rdma_max_cq_size": 0, 00:21:22.642 "rdma_cm_event_timeout_ms": 0, 00:21:22.642 "dhchap_digests": [ 00:21:22.642 "sha256", 00:21:22.642 "sha384", 00:21:22.642 "sha512" 00:21:22.642 ], 00:21:22.642 "dhchap_dhgroups": [ 00:21:22.642 "null", 00:21:22.642 "ffdhe2048", 00:21:22.642 "ffdhe3072", 00:21:22.642 "ffdhe4096", 00:21:22.642 "ffdhe6144", 00:21:22.642 "ffdhe8192" 00:21:22.642 ] 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_nvme_attach_controller", 00:21:22.642 "params": { 00:21:22.642 "name": "nvme0", 00:21:22.642 "trtype": "TCP", 00:21:22.642 "adrfam": "IPv4", 00:21:22.642 "traddr": "10.0.0.2", 00:21:22.642 "trsvcid": "4420", 00:21:22.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.642 "prchk_reftag": false, 00:21:22.642 "prchk_guard": false, 00:21:22.642 "ctrlr_loss_timeout_sec": 0, 00:21:22.642 "reconnect_delay_sec": 0, 00:21:22.642 "fast_io_fail_timeout_sec": 0, 00:21:22.642 "psk": "key0", 00:21:22.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.642 "hdgst": false, 00:21:22.642 "ddgst": false 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_nvme_set_hotplug", 00:21:22.642 "params": { 00:21:22.642 "period_us": 100000, 00:21:22.642 "enable": false 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_enable_histogram", 00:21:22.642 "params": { 00:21:22.642 "name": "nvme0n1", 00:21:22.642 "enable": true 00:21:22.642 } 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "method": "bdev_wait_for_examine" 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }, 00:21:22.642 { 00:21:22.642 "subsystem": "nbd", 00:21:22.642 "config": [] 00:21:22.642 } 00:21:22.642 ] 00:21:22.642 }' 00:21:22.642 [2024-06-10 12:25:28.069604] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:21:22.642 [2024-06-10 12:25:28.069655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697274 ] 00:21:22.642 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.642 [2024-06-10 12:25:28.150046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.642 [2024-06-10 12:25:28.203312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.946 [2024-06-10 12:25:28.336044] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.516 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:23.516 12:25:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:23.517 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.517 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:23.517 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.517 12:25:28 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.517 Running I/O for 1 seconds... 00:21:24.901 00:21:24.901 Latency(us) 00:21:24.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.901 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.901 Verification LBA range: start 0x0 length 0x2000 00:21:24.901 nvme0n1 : 1.02 6055.67 23.65 0.00 0.00 20958.54 6307.84 29709.65 00:21:24.901 =================================================================================================================== 00:21:24.901 Total : 6055.67 23.65 0.00 0.00 20958.54 6307.84 29709.65 00:21:24.901 0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:24.901 nvmf_trace.0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:21:24.901 12:25:30 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 697274 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 697274 ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 697274 
00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 697274 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 697274' 00:21:24.902 killing process with pid 697274 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 697274 00:21:24.902 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.902 00:21:24.902 Latency(us) 00:21:24.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.902 =================================================================================================================== 00:21:24.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 697274 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.902 rmmod nvme_tcp 00:21:24.902 rmmod nvme_fabrics 00:21:24.902 rmmod nvme_keyring 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 696931 ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 696931 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 696931 ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 696931 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 696931 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 696931' 00:21:24.902 killing process with pid 696931 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 696931 00:21:24.902 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 696931 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.163 12:25:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.719 12:25:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.719 12:25:32 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3WFAGCT3FB /tmp/tmp.muRksRrqZt /tmp/tmp.pDBy2MD2Qr 00:21:27.719 00:21:27.719 real 1m24.255s 00:21:27.719 user 2m9.533s 00:21:27.719 sys 0m26.514s 00:21:27.719 12:25:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:27.719 12:25:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 ************************************ 00:21:27.719 END TEST nvmf_tls 00:21:27.719 ************************************ 00:21:27.719 12:25:32 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.719 12:25:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:27.719 12:25:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:27.719 12:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 ************************************ 00:21:27.719 START TEST nvmf_fips 00:21:27.719 ************************************ 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.719 * Looking for test storage... 
00:21:27.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.719 12:25:32 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:27.719 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:27.720 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:27.720 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:27.720 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:27.720 12:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:27.720 Error setting digest 00:21:27.720 00226E4FD07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:27.720 00226E4FD07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.720 12:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.874 
12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:35.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:35.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.874 12:25:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.874 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.874 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.874 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:35.875 Found net devices under 0000:31:00.0: cvl_0_0 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:35.875 Found net devices under 0000:31:00.1: cvl_0_1 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:21:35.875 00:21:35.875 --- 10.0.0.2 ping statistics --- 00:21:35.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.875 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:21:35.875 00:21:35.875 --- 10.0.0.1 ping statistics --- 00:21:35.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.875 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=702351 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 702351 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 702351 ']' 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.875 12:25:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.875 [2024-06-10 12:25:41.420308] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:35.875 [2024-06-10 12:25:41.420377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.875 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.135 [2024-06-10 12:25:41.514684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.135 [2024-06-10 12:25:41.607911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.135 [2024-06-10 12:25:41.607968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.135 [2024-06-10 12:25:41.607977] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.135 [2024-06-10 12:25:41.607983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.135 [2024-06-10 12:25:41.607989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.135 [2024-06-10 12:25:41.608013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.706 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.967 [2024-06-10 12:25:42.379798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.967 [2024-06-10 12:25:42.395796] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.967 [2024-06-10 12:25:42.396058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.967 [2024-06-10 12:25:42.425996] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:36.967 malloc0 00:21:36.967 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.967 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=702687 00:21:36.967 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 702687 /var/tmp/bdevperf.sock 00:21:36.967 12:25:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 702687 ']' 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # 
local max_retries=100 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:36.968 12:25:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.968 [2024-06-10 12:25:42.527529] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:21:36.968 [2024-06-10 12:25:42.527597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702687 ] 00:21:36.968 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.228 [2024-06-10 12:25:42.588472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.228 [2024-06-10 12:25:42.652456] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.801 12:25:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:37.801 12:25:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:37.801 12:25:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:38.062 [2024-06-10 12:25:43.411583] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.062 [2024-06-10 12:25:43.411649] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:38.062 TLSTESTn1 00:21:38.062 12:25:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.062 Running I/O for 10 seconds... 
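The TLS data-path check that just ran is easier to follow as plain commands. A minimal sketch reconstructed from the trace above (paths, RPC socket, and the PSK are the ones this run used; fips.sh's waitforlisten and cleanup handling are elided, and the target side is assumed to already be listening on 10.0.0.2:4420 with the host PSK registered):

#!/usr/bin/env bash
set -e
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
key_path=$rootdir/test/nvmf/fips/key.txt

# Write the interchange-format PSK; the file must be 0600 or SPDK rejects it.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# Start bdevperf idle (-z) on its own RPC socket: QD 128, 4 KiB verify, 10 s.
"$rootdir/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4096 -w verify -t 10 &

# Attach an NVMe/TCP controller using the PSK; the namespace bdev
# comes up as TLSTESTn1, which is the job name in the results below.
"$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Kick off the preconfigured workload and wait for the latency table.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests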
00:21:48.066 00:21:48.066 Latency(us) 00:21:48.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:48.066 Verification LBA range: start 0x0 length 0x2000 00:21:48.066 TLSTESTn1 : 10.01 6505.28 25.41 0.00 0.00 19646.69 4696.75 62040.75 00:21:48.066 =================================================================================================================== 00:21:48.067 Total : 6505.28 25.41 0.00 0.00 19646.69 4696.75 62040.75 00:21:48.067 0 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:48.067 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:48.067 nvmf_trace.0 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 702687 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 702687 ']' 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 702687 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 702687 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 702687' 00:21:48.327 killing process with pid 702687 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 702687 00:21:48.327 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.327 00:21:48.327 Latency(us) 00:21:48.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.327 =================================================================================================================== 00:21:48.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.327 [2024-06-10 12:25:53.794998] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 702687 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.327 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.327 rmmod nvme_tcp 00:21:48.587 rmmod nvme_fabrics 00:21:48.587 rmmod nvme_keyring 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 702351 ']' 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 702351 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 702351 ']' 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 702351 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:48.587 12:25:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 702351 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 702351' 00:21:48.587 killing process with pid 702351 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 702351 00:21:48.587 [2024-06-10 12:25:54.047496] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 702351 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.587 12:25:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.131 12:25:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.131 12:25:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.131 00:21:51.131 real 0m23.459s 00:21:51.131 user 0m24.947s 00:21:51.131 sys 0m9.152s 00:21:51.131 12:25:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:51.131 12:25:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:51.131 ************************************ 00:21:51.131 END TEST nvmf_fips 00:21:51.131 
************************************ 00:21:51.131 12:25:56 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:51.131 12:25:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:51.131 12:25:56 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:51.131 12:25:56 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:51.131 12:25:56 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.131 12:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.301 12:26:03 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:59.302 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.302 12:26:03 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:59.302 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:59.302 Found net devices under 0000:31:00.0: cvl_0_0 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:59.302 Found net devices under 0000:31:00.1: cvl_0_1 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:59.302 12:26:03 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:59.302 12:26:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:59.302 12:26:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:59.302 12:26:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
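Before the next suite starts, nvmf.sh re-runs the same NIC discovery that keeps appearing in these traces. A condensed sketch of that walk, with two stated assumptions: pci_bus_cache is an associative array built elsewhere (e.g. from lspci) mapping "vendor:device" to BDF lists, and the exact source behind the up/up link check is not visible in the trace, so operstate is used here as a stand-in:

#!/usr/bin/env bash
# Condensed from gather_supported_nvmf_pci_devs as traced above.
declare -A pci_bus_cache  # assumed filled elsewhere, e.g. ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"
intel=0x8086 mellanox=0x15b3

# Collect supported NICs by vendor:device id; this rig only has E810 (0x159b).
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")

net_devs=()
for pci in "${pci_devs[@]}"; do
  echo "Found $pci"
  # Resolve the netdev(s) bound to this PCI function through sysfs.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  for net_dev in "${pci_net_devs[@]}"; do
    # Keep only interfaces whose link is up.
    [[ $(< "$net_dev/operstate") == up ]] || continue
    echo "Found net devices under $pci: ${net_dev##*/}"
    net_devs+=("${net_dev##*/}")
  done
done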
00:21:59.302 ************************************ 00:21:59.302 START TEST nvmf_perf_adq 00:21:59.302 ************************************ 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:59.302 * Looking for test storage... 00:21:59.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.302 12:26:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:07.447 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:07.447 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:07.447 Found net devices under 0000:31:00.0: cvl_0_0 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:07.447 Found net devices under 0000:31:00.1: cvl_0_1 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:07.447 12:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:07.447 12:26:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:09.362 12:26:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:14.685 12:26:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:14.685 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:14.686 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:14.686 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:14.686 Found net devices under 0000:31:00.0: cvl_0_0 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:14.686 Found net devices under 0000:31:00.1: cvl_0_1 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.686 12:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.686 12:26:20 
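nvmf_tcp_init, traced above, turns the two E810 ports into a loopback topology: cvl_0_0 becomes the target interface inside a fresh network namespace, cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace:

# target side, isolated in its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side, root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# let NVMe/TCP (port 4420) in from the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT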
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:22:14.686 00:22:14.686 --- 10.0.0.2 ping statistics --- 00:22:14.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.686 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:22:14.686 00:22:14.686 --- 10.0.0.1 ping statistics --- 00:22:14.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.686 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=715445 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 715445 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 715445 ']' 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:14.686 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.686 [2024-06-10 12:26:20.162548] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
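After the two pings confirm connectivity in both directions, nvmfappstart launches the target inside that namespace with --wait-for-rpc and blocks until the RPC socket answers. A sketch of the wait; the polling loop is an approximation of waitforlisten, not its exact code:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll until the app is up and serving /var/tmp/spdk.sock
until ./scripts/rpc.py rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
    sleep 0.5
done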
00:22:14.686 [2024-06-10 12:26:20.162615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.686 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.686 [2024-06-10 12:26:20.241571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.947 [2024-06-10 12:26:20.318844] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.947 [2024-06-10 12:26:20.318883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.947 [2024-06-10 12:26:20.318894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.947 [2024-06-10 12:26:20.318900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.947 [2024-06-10 12:26:20.318905] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.947 [2024-06-10 12:26:20.319043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.947 [2024-06-10 12:26:20.319180] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.947 [2024-06-10 12:26:20.319335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.947 [2024-06-10 12:26:20.319336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:15.518 12:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:15.519 12:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:15.519 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.519 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.519 12:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
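adq_configure_nvmf_target 0 performs the RPC sequence traced here; the same calls written as plain scripts/rpc.py invocations (rpc_cmd is a thin wrapper around this, and the default /var/tmp/spdk.sock socket is assumed):

impl=$(./scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
# placement-id 0: no CPU placement hint, the non-ADQ baseline
./scripts/rpc.py sock_impl_set_options --enable-placement-id 0 \
    --enable-zerocopy-send-server -i "$impl"
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0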
set +x 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.519 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.519 [2024-06-10 12:26:21.117139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.780 Malloc1 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.780 [2024-06-10 12:26:21.176417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=715654 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:15.780 12:26:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:17.705 "tick_rate": 2400000000, 00:22:17.705 
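The rest of the target setup and the workload launch, condensed from the trace (paths relative to the spdk tree; the sleep mirrors perf_adq.sh's pause before sampling stats):

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 4 initiator cores (0xF0) against the 4-core target (0xF): one connection per core
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perfpid=$!
sleep 2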
"poll_groups": [ 00:22:17.705 { 00:22:17.705 "name": "nvmf_tgt_poll_group_000", 00:22:17.705 "admin_qpairs": 1, 00:22:17.705 "io_qpairs": 1, 00:22:17.705 "current_admin_qpairs": 1, 00:22:17.705 "current_io_qpairs": 1, 00:22:17.705 "pending_bdev_io": 0, 00:22:17.705 "completed_nvme_io": 20320, 00:22:17.705 "transports": [ 00:22:17.705 { 00:22:17.705 "trtype": "TCP" 00:22:17.705 } 00:22:17.705 ] 00:22:17.705 }, 00:22:17.705 { 00:22:17.705 "name": "nvmf_tgt_poll_group_001", 00:22:17.705 "admin_qpairs": 0, 00:22:17.705 "io_qpairs": 1, 00:22:17.705 "current_admin_qpairs": 0, 00:22:17.705 "current_io_qpairs": 1, 00:22:17.705 "pending_bdev_io": 0, 00:22:17.705 "completed_nvme_io": 28016, 00:22:17.705 "transports": [ 00:22:17.705 { 00:22:17.705 "trtype": "TCP" 00:22:17.705 } 00:22:17.705 ] 00:22:17.705 }, 00:22:17.705 { 00:22:17.705 "name": "nvmf_tgt_poll_group_002", 00:22:17.705 "admin_qpairs": 0, 00:22:17.705 "io_qpairs": 1, 00:22:17.705 "current_admin_qpairs": 0, 00:22:17.705 "current_io_qpairs": 1, 00:22:17.705 "pending_bdev_io": 0, 00:22:17.705 "completed_nvme_io": 21619, 00:22:17.705 "transports": [ 00:22:17.705 { 00:22:17.705 "trtype": "TCP" 00:22:17.705 } 00:22:17.705 ] 00:22:17.705 }, 00:22:17.705 { 00:22:17.705 "name": "nvmf_tgt_poll_group_003", 00:22:17.705 "admin_qpairs": 0, 00:22:17.705 "io_qpairs": 1, 00:22:17.705 "current_admin_qpairs": 0, 00:22:17.705 "current_io_qpairs": 1, 00:22:17.705 "pending_bdev_io": 0, 00:22:17.705 "completed_nvme_io": 20798, 00:22:17.705 "transports": [ 00:22:17.705 { 00:22:17.705 "trtype": "TCP" 00:22:17.705 } 00:22:17.705 ] 00:22:17.705 } 00:22:17.705 ] 00:22:17.705 }' 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:17.705 12:26:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 715654 00:22:25.883 Initializing NVMe Controllers 00:22:25.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:25.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:25.883 Initialization complete. Launching workers. 
00:22:25.883 ======================================================== 00:22:25.883 Latency(us) 00:22:25.883 Device Information : IOPS MiB/s Average min max 00:22:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11505.00 44.94 5564.14 1175.74 9504.57 00:22:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14670.00 57.30 4375.58 1172.93 45925.66 00:22:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14369.00 56.13 4454.51 1138.66 10793.80 00:22:25.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13967.50 54.56 4582.19 1138.20 11672.29 00:22:25.883 ======================================================== 00:22:25.883 Total : 54511.48 212.94 4700.18 1138.20 45925.66 00:22:25.883 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.883 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.884 rmmod nvme_tcp 00:22:25.884 rmmod nvme_fabrics 00:22:25.884 rmmod nvme_keyring 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 715445 ']' 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 715445 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 715445 ']' 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 715445 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:25.884 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 715445 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 715445' 00:22:26.148 killing process with pid 715445 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 715445 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 715445 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- 
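nvmftestfini plus adq_reload_driver, traced around here, unwind the stack and cycle the ice driver so the tc configuration starts clean. Roughly (the plain ip netns delete stands in for remove_spdk_ns, an assumption about that helper):

modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
rmmod ice && modprobe ice      # adq_reload_driver
sleep 5                        # give ice time to re-create the netdevs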
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.148 12:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.693 12:26:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.693 12:26:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:28.693 12:26:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:30.079 12:26:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:31.995 12:26:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.287 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.288 12:26:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:37.288 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:37.288 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:37.288 Found net devices under 0000:31:00.0: cvl_0_0 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:37.288 Found net devices under 0000:31:00.1: cvl_0_1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.288 
12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:22:37.288 00:22:37.288 --- 10.0.0.2 ping statistics --- 00:22:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.288 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:22:37.288 00:22:37.288 --- 10.0.0.1 ping statistics --- 00:22:37.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.288 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:37.288 net.core.busy_poll = 1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:37.288 net.core.busy_read = 1 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:37.288 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=720338 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 720338 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 720338 ']' 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:37.289 12:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.289 [2024-06-10 12:26:42.824076] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:22:37.289 [2024-06-10 12:26:42.824163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.289 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.550 [2024-06-10 12:26:42.902473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.550 [2024-06-10 12:26:42.977130] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.550 [2024-06-10 12:26:42.977168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.550 [2024-06-10 12:26:42.977175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.550 [2024-06-10 12:26:42.977181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.550 [2024-06-10 12:26:42.977187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
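adq_configure_driver, completed just above, is the core of the ADQ setup: hardware tc offload and busy polling, an offloaded mqprio qdisc with two traffic classes, and a flower filter steering NVMe/TCP (dst_port 4420) into TC 1. Condensed from the trace:

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
ns ethtool --offload cvl_0_0 hw-tc-offload on
ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: prio 0 -> TC0 (queues 0-1), prio 1 -> TC1 (queues 2-3), hw offloaded
ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev cvl_0_0 ingress
ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
ns ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # align XPS with the steered RX queues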
00:22:37.550 [2024-06-10 12:26:42.977323] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.550 [2024-06-10 12:26:42.977447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.550 [2024-06-10 12:26:42.977605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.550 [2024-06-10 12:26:42.977605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.123 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 [2024-06-10 12:26:43.761494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 Malloc1 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 [2024-06-10 12:26:43.818243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=720468 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:38.384 12:26:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:38.384 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:40.296 "tick_rate": 2400000000, 00:22:40.296 "poll_groups": [ 00:22:40.296 { 00:22:40.296 "name": "nvmf_tgt_poll_group_000", 00:22:40.296 "admin_qpairs": 1, 00:22:40.296 "io_qpairs": 2, 00:22:40.296 "current_admin_qpairs": 1, 00:22:40.296 "current_io_qpairs": 2, 00:22:40.296 "pending_bdev_io": 0, 00:22:40.296 "completed_nvme_io": 30203, 00:22:40.296 "transports": [ 00:22:40.296 { 00:22:40.296 "trtype": "TCP" 00:22:40.296 } 00:22:40.296 ] 00:22:40.296 }, 00:22:40.296 { 00:22:40.296 "name": "nvmf_tgt_poll_group_001", 00:22:40.296 "admin_qpairs": 0, 00:22:40.296 "io_qpairs": 2, 00:22:40.296 "current_admin_qpairs": 0, 00:22:40.296 "current_io_qpairs": 2, 00:22:40.296 "pending_bdev_io": 0, 00:22:40.296 "completed_nvme_io": 40586, 00:22:40.296 "transports": [ 00:22:40.296 { 00:22:40.296 "trtype": "TCP" 00:22:40.296 } 00:22:40.296 ] 00:22:40.296 }, 00:22:40.296 { 00:22:40.296 "name": "nvmf_tgt_poll_group_002", 00:22:40.296 "admin_qpairs": 0, 00:22:40.296 "io_qpairs": 0, 00:22:40.296 "current_admin_qpairs": 0, 00:22:40.296 "current_io_qpairs": 0, 00:22:40.296 "pending_bdev_io": 0, 00:22:40.296 "completed_nvme_io": 0, 
00:22:40.296 "transports": [ 00:22:40.296 { 00:22:40.296 "trtype": "TCP" 00:22:40.296 } 00:22:40.296 ] 00:22:40.296 }, 00:22:40.296 { 00:22:40.296 "name": "nvmf_tgt_poll_group_003", 00:22:40.296 "admin_qpairs": 0, 00:22:40.296 "io_qpairs": 0, 00:22:40.296 "current_admin_qpairs": 0, 00:22:40.296 "current_io_qpairs": 0, 00:22:40.296 "pending_bdev_io": 0, 00:22:40.296 "completed_nvme_io": 0, 00:22:40.296 "transports": [ 00:22:40.296 { 00:22:40.296 "trtype": "TCP" 00:22:40.296 } 00:22:40.296 ] 00:22:40.296 } 00:22:40.296 ] 00:22:40.296 }' 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:40.296 12:26:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 720468 00:22:48.435 Initializing NVMe Controllers 00:22:48.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:48.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:48.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:48.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:48.435 Initialization complete. Launching workers. 00:22:48.435 ======================================================== 00:22:48.435 Latency(us) 00:22:48.435 Device Information : IOPS MiB/s Average min max 00:22:48.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9800.70 38.28 6557.47 1120.80 50003.28 00:22:48.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9167.30 35.81 6995.24 981.17 53974.14 00:22:48.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11187.20 43.70 5736.71 892.98 51776.71 00:22:48.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10889.10 42.54 5876.88 916.00 49729.86 00:22:48.435 ======================================================== 00:22:48.435 Total : 41044.30 160.33 6250.98 892.98 53974.14 00:22:48.435 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.435 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.435 rmmod nvme_tcp 00:22:48.696 rmmod nvme_fabrics 00:22:48.696 rmmod nvme_keyring 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 720338 ']' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 720338 ']' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 720338' 00:22:48.696 killing process with pid 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 720338 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.696 12:26:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.998 12:26:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.998 12:26:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:51.998 00:22:51.998 real 0m53.709s 00:22:51.998 user 2m49.458s 00:22:51.998 sys 0m11.393s 00:22:51.998 12:26:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:51.998 12:26:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.998 ************************************ 00:22:51.998 END TEST nvmf_perf_adq 00:22:51.998 ************************************ 00:22:51.998 12:26:57 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.998 12:26:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:51.998 12:26:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:51.998 12:26:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.998 ************************************ 00:22:51.998 START TEST nvmf_shutdown 00:22:51.998 ************************************ 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.998 * Looking for test storage... 
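run_test, which produces the START/END banners seen here, wraps each suite in a banner pair and timing; roughly (a simplification of the autotest_common.sh helper, not its exact code):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                         # e.g. shutdown.sh --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}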
00:22:51.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
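nvmf/common.sh, being sourced above, derives the initiator identity from nvme-cli. A sketch of those variables and how nvme connect would consume them; the UUID extraction shown is an assumption, only the resulting values are confirmed by the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random UUID>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the bare UUID (extraction method assumed)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later consumed like:
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn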
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.998 12:26:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:51.999 12:26:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.999 12:26:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:51.999 12:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:51.999 12:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:51.999 12:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.267 ************************************ 00:22:52.267 START TEST nvmf_shutdown_tc1 00:22:52.267 ************************************ 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:22:52.267 12:26:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.267 12:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:00.453 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:00.453 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.453 12:27:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.453 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:00.454 Found net devices under 0000:31:00.0: cvl_0_0 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:00.454 Found net devices under 0000:31:00.1: cvl_0_1 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:23:00.454 00:23:00.454 --- 10.0.0.2 ping statistics --- 00:23:00.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.454 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:23:00.454 00:23:00.454 --- 10.0.0.1 ping statistics --- 00:23:00.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.454 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=727479 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 727479 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 727479 ']' 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.454 12:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.454 [2024-06-10 12:27:05.410684] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
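The nvmf_tcp_init sequence above is plain iproute2 plumbing: the target-side e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, the two sides get 10.0.0.2 and 10.0.0.1 on a /24, TCP port 4420 is opened, and a ping in each direction proves the path before the target is launched inside the namespace. A minimal standalone sketch of the same topology, with a veth pair standing in for the physical ports (the names tgt_ns, veth_ini and veth_tgt are illustrative stand-ins, not harness names):

# Rebuild the test topology with virtual interfaces; no SPDK harness assumed.
ip netns add tgt_ns
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns
ip addr add 10.0.0.1/24 dev veth_ini                        # initiator side
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target side
ip link set veth_ini up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # same reachability check as above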
00:23:00.454 [2024-06-10 12:27:05.410750] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.454 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.454 [2024-06-10 12:27:05.508774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.454 [2024-06-10 12:27:05.606273] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.454 [2024-06-10 12:27:05.606333] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.454 [2024-06-10 12:27:05.606342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.454 [2024-06-10 12:27:05.606348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.454 [2024-06-10 12:27:05.606355] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.454 [2024-06-10 12:27:05.606491] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.454 [2024-06-10 12:27:05.606630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.454 [2024-06-10 12:27:05.606776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.454 [2024-06-10 12:27:05.606777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.715 [2024-06-10 12:27:06.224539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.715 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.716 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.716 Malloc1 00:23:00.975 [2024-06-10 12:27:06.327836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.975 Malloc2 00:23:00.975 Malloc3 00:23:00.975 Malloc4 00:23:00.975 Malloc5 00:23:00.975 Malloc6 00:23:00.975 Malloc7 00:23:01.236 Malloc8 00:23:01.236 Malloc9 00:23:01.236 Malloc10 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=727768 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 727768 /var/tmp/bdevperf.sock 00:23:01.236 12:27:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 727768 ']' 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.236 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": 
"$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 [2024-06-10 12:27:06.774533] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:23:01.237 [2024-06-10 12:27:06.774585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": ${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.237 } 00:23:01.237 EOF 00:23:01.237 )") 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.237 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.237 { 00:23:01.237 "params": { 00:23:01.237 "name": "Nvme$subsystem", 00:23:01.237 "trtype": "$TEST_TRANSPORT", 00:23:01.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.237 "adrfam": "ipv4", 00:23:01.237 "trsvcid": "$NVMF_PORT", 00:23:01.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.237 "hdgst": ${hdgst:-false}, 00:23:01.237 "ddgst": 
${ddgst:-false} 00:23:01.237 }, 00:23:01.237 "method": "bdev_nvme_attach_controller" 00:23:01.238 } 00:23:01.238 EOF 00:23:01.238 )") 00:23:01.238 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.238 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.238 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:01.238 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.238 12:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme1", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme2", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme3", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme4", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme5", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme6", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme7", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 
00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme8", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme9", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 },{ 00:23:01.238 "params": { 00:23:01.238 "name": "Nvme10", 00:23:01.238 "trtype": "tcp", 00:23:01.238 "traddr": "10.0.0.2", 00:23:01.238 "adrfam": "ipv4", 00:23:01.238 "trsvcid": "4420", 00:23:01.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.238 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.238 "hdgst": false, 00:23:01.238 "ddgst": false 00:23:01.238 }, 00:23:01.238 "method": "bdev_nvme_attach_controller" 00:23:01.238 }' 00:23:01.238 [2024-06-10 12:27:06.831897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.498 [2024-06-10 12:27:06.887159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 727768 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:02.882 12:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:03.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 727768 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 727479 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:03.827 12:27:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 [2024-06-10 12:27:09.306854] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
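To recap where tc1 stands at this point in the trace: a dummy initiator (bdev_svc, pid 727768) attached all ten controllers, was then SIGKILLed without any cleanup (the "line 73: 727768 Killed" message above), and bdevperf is now being brought up against the same target, which must have shed the dead connections and still accept new ones. Condensed from the shutdown.sh lines visible in the trace (paths shortened, pids taken from this run):

bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!                        # 727768 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
kill -9 "$perfpid"                # drop the connected initiator hard
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"                # 727479: the target must still be alive
bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1 # reconnect and verify I/O for one second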
00:23:03.827 [2024-06-10 12:27:09.306904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728648 ] 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.827 "trsvcid": "$NVMF_PORT", 00:23:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.827 "hdgst": ${hdgst:-false}, 00:23:03.827 "ddgst": ${ddgst:-false} 00:23:03.827 }, 00:23:03.827 "method": "bdev_nvme_attach_controller" 00:23:03.827 } 00:23:03.827 EOF 00:23:03.827 )") 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.827 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.827 { 00:23:03.827 "params": { 00:23:03.827 "name": "Nvme$subsystem", 00:23:03.827 "trtype": "$TEST_TRANSPORT", 00:23:03.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.827 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "$NVMF_PORT", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.828 "hdgst": ${hdgst:-false}, 00:23:03.828 "ddgst": ${ddgst:-false} 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 } 00:23:03.828 EOF 00:23:03.828 )") 00:23:03.828 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.828 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
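Each block being assembled here lands in bdevperf's --json config as one bdev_nvme_attach_controller entry. Issued as a runtime RPC instead, the first entry would look roughly like this (a sketch, assuming an app already listening on /var/tmp/bdevperf.sock):

# Equivalent rpc.py call for the Nvme1 entry (sketch).
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1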
00:23:03.828 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.828 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:03.828 12:27:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme1", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme2", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme3", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme4", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme5", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme6", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme7", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme8", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.828 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme9", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 },{ 00:23:03.828 "params": { 00:23:03.828 "name": "Nvme10", 00:23:03.828 "trtype": "tcp", 00:23:03.828 "traddr": "10.0.0.2", 00:23:03.828 "adrfam": "ipv4", 00:23:03.828 "trsvcid": "4420", 00:23:03.828 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.828 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.828 "hdgst": false, 00:23:03.828 "ddgst": false 00:23:03.828 }, 00:23:03.828 "method": "bdev_nvme_attach_controller" 00:23:03.828 }' 00:23:03.828 [2024-06-10 12:27:09.373090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.089 [2024-06-10 12:27:09.436822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.473 Running I/O for 1 seconds... 00:23:06.416 00:23:06.416 Latency(us) 00:23:06.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.416 Verification LBA range: start 0x0 length 0x400 00:23:06.416 Nvme1n1 : 1.15 222.95 13.93 0.00 0.00 283981.87 17694.72 234181.97 00:23:06.416 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.416 Verification LBA range: start 0x0 length 0x400 00:23:06.416 Nvme2n1 : 1.14 224.96 14.06 0.00 0.00 276970.45 22609.92 263891.63 00:23:06.416 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.416 Verification LBA range: start 0x0 length 0x400 00:23:06.416 Nvme3n1 : 1.14 227.84 14.24 0.00 0.00 267231.65 6908.59 242920.11 00:23:06.416 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.416 Verification LBA range: start 0x0 length 0x400 00:23:06.416 Nvme4n1 : 1.18 271.67 16.98 0.00 0.00 218339.75 10376.53 253405.87 00:23:06.416 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.416 Verification LBA range: start 0x0 length 0x400 00:23:06.416 Nvme5n1 : 1.18 216.11 13.51 0.00 0.00 274244.27 19333.12 253405.87 00:23:06.416 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.417 Verification LBA range: start 0x0 length 0x400 00:23:06.417 Nvme6n1 : 1.20 267.76 16.74 0.00 0.00 217474.56 20753.07 241172.48 00:23:06.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.417 Verification LBA range: start 0x0 length 0x400 00:23:06.417 Nvme7n1 : 1.15 222.26 13.89 0.00 0.00 256536.75 17913.17 248162.99 00:23:06.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.417 Verification LBA range: start 0x0 length 0x400 00:23:06.417 Nvme8n1 : 1.19 268.93 16.81 0.00 0.00 209043.46 19333.12 242920.11 00:23:06.417 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.417 Verification LBA range: start 0x0 length 0x400 00:23:06.417 Nvme9n1 : 1.19 268.39 16.77 0.00 0.00 205550.25 17257.81 242920.11 00:23:06.417 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:06.417 
Verification LBA range: start 0x0 length 0x400 00:23:06.417 Nvme10n1 : 1.21 265.39 16.59 0.00 0.00 204538.71 11851.09 256901.12 00:23:06.417 =================================================================================================================== 00:23:06.417 Total : 2456.27 153.52 0.00 0.00 238053.72 6908.59 263891.63 00:23:06.678 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.679 rmmod nvme_tcp 00:23:06.679 rmmod nvme_fabrics 00:23:06.679 rmmod nvme_keyring 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 727479 ']' 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 727479 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 727479 ']' 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 727479 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 727479 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 727479' 00:23:06.679 killing process with pid 727479 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 727479 00:23:06.679 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 727479 00:23:06.940 12:27:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.940 12:27:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.487 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.487 00:23:09.487 real 0m16.852s 00:23:09.487 user 0m33.145s 00:23:09.487 sys 0m6.912s 00:23:09.487 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:09.487 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.487 ************************************ 00:23:09.487 END TEST nvmf_shutdown_tc1 00:23:09.487 ************************************ 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.488 ************************************ 00:23:09.488 START TEST nvmf_shutdown_tc2 00:23:09.488 ************************************ 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
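The tc1 teardown traced above is the generic stoptarget/nvmftestfini path: the bdevperf state file and the generated bdevperf.conf/rpcs.txt are removed, the kernel initiator modules are unloaded (a single modprobe -v -r nvme-tcp also drops nvme_fabrics and nvme_keyring, as the rmmod lines show), and the nvmf_tgt process is killed and reaped before the namespace cleanup. A minimal sketch of the same sequence, assuming the nvmfpid and testdir variables rather than the real nvmf/common.sh helpers:

    # Simplified tc1-style teardown; nvmfpid would be 727479 in this run
    # and testdir the spdk/test/nvmf/target directory (both assumed here).
    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # retried: controllers may still be detaching
        sleep 1
    done
    kill "$nvmfpid" && wait "$nvmfpid"     # stop the target reactor process
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1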
00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.488 12:27:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.488 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.488 12:27:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.488 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.488 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:23:09.489 00:23:09.489 --- 10.0.0.2 ping statistics --- 00:23:09.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.489 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:09.489 00:23:09.489 --- 10.0.0.1 ping statistics --- 00:23:09.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.489 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=730020 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 730020 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@830 -- # '[' -z 730020 ']' 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:09.489 12:27:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.489 [2024-06-10 12:27:14.971490] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:09.489 [2024-06-10 12:27:14.971538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.489 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.489 [2024-06-10 12:27:15.035109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.489 [2024-06-10 12:27:15.089528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.489 [2024-06-10 12:27:15.089560] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.489 [2024-06-10 12:27:15.089566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.489 [2024-06-10 12:27:15.089571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.489 [2024-06-10 12:27:15.089575] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
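Two details of the target launch above are easy to miss. First, nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace (note the doubled ip netns exec in the nvmf/common.sh@480 command) with core mask 0x1E; 0x1E is binary 11110, i.e. cores 1 through 4, which is exactly the set of 'Reactor started on core N' notices printed next. Second, the reachability the test depends on comes from the namespace plumbing traced earlier; condensed from the nvmf_tcp_init trace, with every name and address as in this log:

    # cvl_0_0 (target port, 10.0.0.2) is moved into its own namespace;
    # cvl_0_1 (initiator port, 10.0.0.1) stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # the 0.495 ms reply above confirms the path is up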
00:23:09.489 [2024-06-10 12:27:15.089686] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.489 [2024-06-10 12:27:15.089846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.489 [2024-06-10 12:27:15.090006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.489 [2024-06-10 12:27:15.090008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.431 [2024-06-10 12:27:15.801474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.431 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.432 12:27:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.432 Malloc1 00:23:10.432 [2024-06-10 12:27:15.900111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.432 Malloc2 00:23:10.432 Malloc3 00:23:10.432 Malloc4 00:23:10.432 Malloc5 00:23:10.692 Malloc6 00:23:10.692 Malloc7 00:23:10.692 Malloc8 00:23:10.692 Malloc9 00:23:10.692 Malloc10 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=730398 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 730398 /var/tmp/bdevperf.sock 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 730398 ']' 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
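The subsystem setup above is batched rather than issued RPC by RPC: each shutdown.sh@28 cat appends one block per subsystem to rpcs.txt, and the bare rpc_cmd at shutdown.sh@35 replays the whole file over a single RPC session, which is why only the Malloc1..Malloc10 markers and one 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice surface in the log. The batch itself is not echoed, so the reconstruction below is only a guess at its shape: the bdev sizes, serial numbers and flags are invented, and just the NQNs, address and port are taken from this run:

    # Hypothetical generator for rpcs.txt (per-subsystem contents assumed):
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    # Replaying one command per line also works, where the harness instead
    # streams the whole file through a single long-lived rpc_cmd session:
    while read -r cmd; do scripts/rpc.py $cmd; done < rpcs.txt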
00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.692 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.692 { 00:23:10.692 "params": { 00:23:10.692 "name": "Nvme$subsystem", 00:23:10.692 "trtype": "$TEST_TRANSPORT", 00:23:10.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.692 "adrfam": "ipv4", 00:23:10.693 "trsvcid": "$NVMF_PORT", 00:23:10.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.693 "hdgst": ${hdgst:-false}, 00:23:10.693 "ddgst": ${ddgst:-false} 00:23:10.693 }, 00:23:10.693 "method": "bdev_nvme_attach_controller" 00:23:10.693 } 00:23:10.693 EOF 00:23:10.693 )") 00:23:10.693 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 [2024-06-10 12:27:16.339203] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
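The whole 10-second bdevperf run is configured by the command line captured at shutdown.sh@102 just above; an annotated restatement follows (flag meanings are the standard bdevperf ones, and gen_nvmf_target_json is the helper whose trace continues below):

    # -r: private JSON-RPC socket, later polled via framework_wait_init
    # --json: bdev config arrives on /dev/fd/63 through process substitution
    # -q 64: 64 outstanding I/Os per bdev; -o 65536: 64 KiB I/O size
    # -w verify: data is read back and checked; -t 10: run for 10 seconds
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10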
00:23:10.954 [2024-06-10 12:27:16.339254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730398 ] 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.954 "name": "Nvme$subsystem", 00:23:10.954 "trtype": "$TEST_TRANSPORT", 00:23:10.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.954 "adrfam": "ipv4", 00:23:10.954 "trsvcid": "$NVMF_PORT", 00:23:10.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.954 "hdgst": ${hdgst:-false}, 00:23:10.954 "ddgst": ${ddgst:-false} 00:23:10.954 }, 00:23:10.954 "method": "bdev_nvme_attach_controller" 00:23:10.954 } 00:23:10.954 EOF 00:23:10.954 )") 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.954 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.954 { 00:23:10.954 "params": { 00:23:10.955 "name": "Nvme$subsystem", 00:23:10.955 "trtype": "$TEST_TRANSPORT", 00:23:10.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "$NVMF_PORT", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.955 "hdgst": ${hdgst:-false}, 00:23:10.955 "ddgst": ${ddgst:-false} 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 } 00:23:10.955 EOF 00:23:10.955 )") 00:23:10.955 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.955 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
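gen_nvmf_target_json, whose trace appears above, is a plain bash generator: one JSON fragment per subsystem is captured into the config array via a here-document, the array is joined on IFS=',' and the result is run through jq. A condensed, self-contained sketch of the same pattern (the real helper in nvmf/common.sh builds the fragment with a here-doc and wires jq slightly differently; the surrounding brackets below exist only so jq can validate the sketch's output):

    gen_config_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
        done
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .   # one attach block per subsystem
    }
    TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 gen_config_sketch {1..10}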
00:23:10.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.955 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:10.955 12:27:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme1", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme2", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme3", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme4", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme5", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme6", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme7", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme8", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.955 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme9", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 },{ 00:23:10.955 "params": { 00:23:10.955 "name": "Nvme10", 00:23:10.955 "trtype": "tcp", 00:23:10.955 "traddr": "10.0.0.2", 00:23:10.955 "adrfam": "ipv4", 00:23:10.955 "trsvcid": "4420", 00:23:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.955 "hdgst": false, 00:23:10.955 "ddgst": false 00:23:10.955 }, 00:23:10.955 "method": "bdev_nvme_attach_controller" 00:23:10.955 }' 00:23:10.955 [2024-06-10 12:27:16.405655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.955 [2024-06-10 12:27:16.470168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.348 Running I/O for 10 seconds... 00:23:12.348 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:12.348 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:12.348 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.348 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.348 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.607 12:27:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.607 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.607 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:12.607 12:27:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:12.607 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:12.867 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 730398 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 730398 ']' 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 730398 00:23:13.127 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 730398 00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:13.128 12:27:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 730398'
00:23:13.128 killing process with pid 730398
00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 730398
00:23:13.128 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 730398
00:23:13.437 Received shutdown signal, test time was about 0.960575 seconds
00:23:13.437
00:23:13.437 Latency(us)
00:23:13.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:13.437 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme1n1 : 0.94 205.24 12.83 0.00 0.00 308017.49 18786.99 251658.24
00:23:13.437 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme2n1 : 0.95 268.20 16.76 0.00 0.00 230841.81 18896.21 242920.11
00:23:13.437 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme3n1 : 0.95 270.33 16.90 0.00 0.00 224220.80 14308.69 242920.11
00:23:13.437 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme4n1 : 0.95 268.94 16.81 0.00 0.00 220568.96 19114.67 255153.49
00:23:13.437 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme5n1 : 0.94 204.06 12.75 0.00 0.00 284025.74 20316.16 256901.12
00:23:13.437 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme6n1 : 0.93 206.31 12.89 0.00 0.00 274069.90 19551.57 228939.09
00:23:13.437 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme7n1 : 0.92 214.28 13.39 0.00 0.00 255071.02 4068.69 251658.24
00:23:13.437 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme8n1 : 0.96 267.38 16.71 0.00 0.00 201930.24 20971.52 249910.61
00:23:13.437 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme9n1 : 0.96 266.75 16.67 0.00 0.00 198096.43 13544.11 251658.24
00:23:13.437 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:13.437 Verification LBA range: start 0x0 length 0x400
00:23:13.437 Nvme10n1 : 0.94 203.24 12.70 0.00 0.00 253128.53 18459.31 270882.13
00:23:13.437 ===================================================================================================================
00:23:13.437 Total : 2374.73 148.42 0.00 0.00 240768.91 4068.69 270882.13
00:23:13.437 12:27:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 730020
00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:14.404 12:27:19
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.404 rmmod nvme_tcp 00:23:14.404 rmmod nvme_fabrics 00:23:14.404 rmmod nvme_keyring 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 730020 ']' 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 730020 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 730020 ']' 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 730020 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:14.404 12:27:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 730020 00:23:14.664 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:14.664 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:14.664 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 730020' 00:23:14.664 killing process with pid 730020 00:23:14.664 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 730020 00:23:14.664 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 730020 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.924 
12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.924 12:27:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.836 00:23:16.836 real 0m7.792s 00:23:16.836 user 0m23.500s 00:23:16.836 sys 0m1.159s 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:16.836 ************************************ 00:23:16.836 END TEST nvmf_shutdown_tc2 00:23:16.836 ************************************ 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.836 ************************************ 00:23:16.836 START TEST nvmf_shutdown_tc3 00:23:16.836 ************************************ 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.836 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 
00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.097 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:17.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:17.098 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:17.098 Found net devices under 0000:31:00.0: cvl_0_0 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.098 12:27:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:17.098 Found net devices under 0000:31:00.1: cvl_0_1 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.098 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
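For anyone reconstructing the test topology from the xtrace above: nvmf_tcp_init detects two ice ports, moves the first (cvl_0_0) into a private network namespace to play the target, and leaves the second (cvl_0_1) in the root namespace as the initiator. Collapsed out of the trace, with interface names and addresses exactly as logged, the plumbing is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves inside
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule and the two pings in the next entries only confirm that port 4420 traffic can cross between the namespaces before the target application is started.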
00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:23:17.569 00:23:17.569 --- 10.0.0.2 ping statistics --- 00:23:17.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.569 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:17.569 00:23:17.569 --- 10.0.0.1 ping statistics --- 00:23:17.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.569 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=731666 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 731666 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 731666 ']' 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:17.569 12:27:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.569 [2024-06-10 12:27:22.880190] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:17.569 [2024-06-10 12:27:22.880246] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.569 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.569 [2024-06-10 12:27:22.961401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.569 [2024-06-10 12:27:23.016961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.569 [2024-06-10 12:27:23.016993] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.569 [2024-06-10 12:27:23.016998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.569 [2024-06-10 12:27:23.017003] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.569 [2024-06-10 12:27:23.017007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.569 [2024-06-10 12:27:23.017122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.569 [2024-06-10 12:27:23.017278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.569 [2024-06-10 12:27:23.017405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.569 [2024-06-10 12:27:23.017407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.138 [2024-06-10 12:27:23.696545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter 
create_subsystems 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.138 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.398 12:27:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.398 Malloc1 00:23:18.398 [2024-06-10 12:27:23.794980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.398 Malloc2 00:23:18.398 Malloc3 00:23:18.398 Malloc4 00:23:18.398 Malloc5 00:23:18.398 Malloc6 00:23:18.398 Malloc7 00:23:18.660 Malloc8 00:23:18.660 Malloc9 00:23:18.660 Malloc10 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 
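The trace shows ten `cat` calls appending to rpcs.txt and then one batched rpc_cmd, but the per-subsystem stanza itself is never expanded, so the following is a hypothetical reconstruction for i=1 using standard SPDK RPC method names. Only the Malloc1/cnode1 naming, the 10.0.0.2 listen address, and port 4420 are confirmed by the surrounding log; the malloc bdev geometry and serial number are assumptions:

    # appended to rpcs.txt once per subsystem (i = 1..10); arguments are illustrative
    bdev_malloc_create -b Malloc1 64 512                              # 64 MiB bdev, 512 B blocks (assumed)
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1      # serial number assumed
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Feeding the accumulated file through a single rpc.py session, rather than spawning one process per call, presumably keeps Python start-up cost out of the timed create_subsystems step.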
00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=731937 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 731937 /var/tmp/bdevperf.sock 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 731937 ']' 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 
"method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": 
"bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 [2024-06-10 12:27:24.234681] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:18.660 [2024-06-10 12:27:24.234733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731937 ] 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.660 "params": { 00:23:18.660 "name": "Nvme$subsystem", 00:23:18.660 "trtype": "$TEST_TRANSPORT", 00:23:18.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.660 "adrfam": "ipv4", 00:23:18.660 "trsvcid": "$NVMF_PORT", 00:23:18.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.660 "hdgst": ${hdgst:-false}, 00:23:18.660 "ddgst": ${ddgst:-false} 00:23:18.660 }, 00:23:18.660 "method": "bdev_nvme_attach_controller" 00:23:18.660 } 00:23:18.660 EOF 00:23:18.660 )") 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.660 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.660 { 00:23:18.661 "params": { 00:23:18.661 "name": "Nvme$subsystem", 00:23:18.661 "trtype": "$TEST_TRANSPORT", 00:23:18.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.661 "adrfam": "ipv4", 00:23:18.661 "trsvcid": "$NVMF_PORT", 00:23:18.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.661 "hdgst": ${hdgst:-false}, 00:23:18.661 "ddgst": ${ddgst:-false} 00:23:18.661 }, 00:23:18.661 "method": "bdev_nvme_attach_controller" 00:23:18.661 } 00:23:18.661 EOF 00:23:18.661 )") 00:23:18.661 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.661 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.661 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.661 { 00:23:18.661 "params": { 00:23:18.661 "name": "Nvme$subsystem", 00:23:18.661 "trtype": 
"$TEST_TRANSPORT", 00:23:18.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.661 "adrfam": "ipv4", 00:23:18.661 "trsvcid": "$NVMF_PORT", 00:23:18.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.661 "hdgst": ${hdgst:-false}, 00:23:18.661 "ddgst": ${ddgst:-false} 00:23:18.661 }, 00:23:18.661 "method": "bdev_nvme_attach_controller" 00:23:18.661 } 00:23:18.661 EOF 00:23:18.661 )") 00:23:18.661 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.922 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:18.922 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:18.922 12:27:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme1", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme2", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme3", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme4", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme5", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme6", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": 
"Nvme7", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme8", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme9", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 },{ 00:23:18.922 "params": { 00:23:18.922 "name": "Nvme10", 00:23:18.922 "trtype": "tcp", 00:23:18.922 "traddr": "10.0.0.2", 00:23:18.922 "adrfam": "ipv4", 00:23:18.922 "trsvcid": "4420", 00:23:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.922 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.922 "hdgst": false, 00:23:18.922 "ddgst": false 00:23:18.922 }, 00:23:18.922 "method": "bdev_nvme_attach_controller" 00:23:18.922 }' 00:23:18.922 [2024-06-10 12:27:24.301686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.922 [2024-06-10 12:27:24.367230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.305 Running I/O for 10 seconds... 
00:23:20.305 12:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:20.305 12:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:20.305 12:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:20.305 12:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.305 12:27:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:20.565 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.825 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:20.826 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.086 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:21.363 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 731666 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 731666 ']' 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 731666 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 731666 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 731666' 00:23:21.364 killing process with pid 731666 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 731666 00:23:21.364 12:27:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 731666 00:23:21.364 [2024-06-10 12:27:26.783437] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7be30 is same with the state(5) to be set 00:23:21.364 [2024-06-10 12:27:26.784267] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200f390 is same with the state(5) to be set 00:23:21.364 [2024-06-10 12:27:26.784291] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200f390 is same with the state(5) to be set 00:23:21.364 [2024-06-10 12:27:26.784297] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x200f390 is same with the state(5) to be set 00:23:21.364
[tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: 'The recv state of tqpair=0x200f390 is same with the state(5) to be set' repeated verbatim at dozens of timestamps from 12:27:26.784302 through 12:27:26.784575; the identical message then repeats for tqpair=0x1f7c2d0 from 12:27:26.785671 onward]
00:23:21.366 [2024-06-10 12:27:26.785890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same
with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785899] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785912] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785926] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.785940] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c2d0 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787069] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787107] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787112] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787130] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787135] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787144] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787157] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787162] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787167] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787176] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787180] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787189] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787212] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787222] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787227] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787231] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787236] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787241] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the 
state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787246] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787250] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787255] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787259] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787264] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787269] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787273] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787282] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787291] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787296] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787309] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787314] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787323] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787329] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787333] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787346] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.366 [2024-06-10 12:27:26.787350] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787354] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787359] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787363] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787368] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787372] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787377] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.787381] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7c790 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788566] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788593] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788597] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788602] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788607] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788611] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788616] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788625] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 
12:27:26.788634] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788639] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788647] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788661] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788665] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788674] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788679] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788693] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788698] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788703] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788708] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788712] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788717] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788734] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same 
with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788766] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788776] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788790] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788824] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788833] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788837] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788842] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d0d0 is same with the state(5) to be set 00:23:21.367 [2024-06-10 12:27:26.788846] 
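The tcp.c:1602 errors above (reduced here to the first occurrence per qpair) all report the same condition: SPDK's nvmf_tcp_qpair_set_recv_state() is asked to move a TCP qpair into the receive state it is already in, here state 5, and logs each redundant attempt. The initiator-side message at nvme_tcp.c:323 further below is the same guard on the host path. A minimal, self-contained sketch of that guard, using simplified stand-in types rather than the real SPDK structures:

    #include <stdio.h>

    /* Numeric value 5 matches the "state(5)" printed in the log;
     * SPDK's actual enum names are not reproduced here. */
    enum pdu_recv_state { RECV_STATE_5 = 5 };

    /* Simplified stand-in for struct spdk_nvmf_tcp_qpair. */
    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* The guard that produces the repeated *ERROR* lines above:
             * a same-state transition is logged and ignored. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { RECV_STATE_5 };
        set_recv_state(&q, RECV_STATE_5); /* emits one error line */
        return 0;
    }

Because the setter returns early, each repeated call is a no-op; the volume of *ERROR* output during disconnect is noise rather than a stall, which is why the test run continues below.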
00:23:21.367 [2024-06-10 12:27:26.789155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.367 [2024-06-10 12:27:26.789187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.367 [2024-06-10 12:27:26.789205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.367 [2024-06-10 12:27:26.789213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.367 [2024-06-10 12:27:26.789221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.367 [2024-06-10 12:27:26.789233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.367 [2024-06-10 12:27:26.789241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ade0 is same with the state(5) to be set
00:23:21.368 [2024-06-10 12:27:26.789294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc69e60 is same with the state(5) to be set
00:23:21.368 [2024-06-10 12:27:26.789387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb14c0 is same with the state(5) to be set
00:23:21.368 [2024-06-10 12:27:26.789469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a050 is same with the state(5) to be set
00:23:21.368 [2024-06-10 12:27:26.789560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6f60 is same with the state(5) to be set
00:23:21.368 [2024-06-10 12:27:26.789640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.368 [2024-06-10 12:27:26.789693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:21.368 [2024-06-10 12:27:26.789699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12300 is same with the state(5) to be set
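The (00/08) pair in those completions is NVMe status code type 0x0 (generic command status) with status code 0x08, defined as Command Aborted due to SQ Deletion: each controller's outstanding admin ASYNC EVENT REQUEST commands (cid 0 through 3) are failed back as its submission queues are deleted during disconnect. A small, hypothetical decoder for that pair (the function name here is illustrative, not SPDK API):

    #include <stdio.h>

    /* Illustrative decode of the "(SCT/SC)" pair printed above; only the
     * generic-status codes seen in this log are handled. */
    static const char *
    decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION"; /* generic status 08h */
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESSFUL COMPLETION";
        return "OTHER";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
        return 0;
    }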
00:23:21.368 [2024-06-10 12:27:26.790092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d570 is same with the state(5) to be set
00:23:21.369 [2024-06-10 12:27:26.791262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7da10 is same with the state(5) to be set
00:23:21.370 [2024-06-10 12:27:26.792166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set
00:23:21.370 [2024-06-10
12:27:26.792407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792411] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792416] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792420] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792425] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792429] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.370 [2024-06-10 12:27:26.792434] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792438] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792446] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792450] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792459] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ded0 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792895] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792900] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792926] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same 
with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792944] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792948] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792952] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792966] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792970] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792974] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792979] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.792984] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804791] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804807] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804817] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804827] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804840] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804861] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804874] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804879] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804883] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804888] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804901] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804905] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804914] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804918] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the 
state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804932] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804962] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.804973] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7e370 is same with the state(5) to be set 00:23:21.371 [2024-06-10 12:27:26.810877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.371 [2024-06-10 12:27:26.810904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.371 [2024-06-10 12:27:26.810919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.371 [2024-06-10 12:27:26.810927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.371 [2024-06-10 12:27:26.810936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.371 [2024-06-10 12:27:26.810943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.371 [2024-06-10 12:27:26.810952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.371 [2024-06-10 12:27:26.810959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.371 [2024-06-10 12:27:26.810968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.371 [2024-06-10 12:27:26.810975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
00:23:21.371 [2024-06-10 12:27:26.810877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.371 [2024-06-10 12:27:26.810904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same print_command/print_completion pair repeated for WRITE cid:42-63 (lba:29952-32640, len:128) and READ cid:0-40 (lba:24576-29696, len:128), all ABORTED - SQ DELETION (00/08), through 2024-06-10 12:27:26.811943 ...]
00:23:21.373 [2024-06-10 12:27:26.811970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.373 [2024-06-10 12:27:26.812016] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb94d30 was disconnected and freed. reset controller.
00:23:21.373 [2024-06-10 12:27:26.812603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 
12:27:26.812789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.373 [2024-06-10 12:27:26.812863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.373 [2024-06-10 12:27:26.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 
12:27:26.812954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.812987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 12:27:26.813099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.374 [2024-06-10 12:27:26.813106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.374 [2024-06-10 
12:27:26.813115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.374 [2024-06-10 12:27:26.813122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) pair repeats for cid:31 through cid:63, lba 28544 through 32640, len:128 each ...]
00:23:21.375 [2024-06-10 12:27:26.813688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.375 [2024-06-10 12:27:26.813728] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xadf270 was disconnected and freed. reset controller.
00:23:21.375 [2024-06-10 12:27:26.813951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0ade0 (9): Bad file descriptor
00:23:21.375 [2024-06-10 12:27:26.813990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.375 [2024-06-10 12:27:26.813999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3 ...]
00:23:21.375 [2024-06-10 12:27:26.814053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc50dc0 is same with the state(5) to be set
00:23:21.375 [2024-06-10 12:27:26.814071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc69e60 (9): Bad file descriptor
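Every aborted command above carries the same status tuple, "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0": status code type 0x00 (generic) with status code 0x08, and the phase, more, and do-not-retry bits all clear. The short C sketch below is illustration only, not part of the test run; the bit layout follows the NVMe completion-status field, which is what produces the "(SCT/SC)" pair in the log lines.

/* Decode the 16-bit NVMe completion status halfword (CQE DW3, bits 31:16)
 * the way the log renders it: "(SCT/SC) ... p:.. m:.. dnr:..". */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* "ABORTED - SQ DELETION (00/08)" corresponds to SCT=0x0, SC=0x08. */
    uint16_t status = 0x08 << 1;          /* SC occupies bits 8:1 */

    unsigned p   = status & 0x1;          /* bit 0: phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry */

    /* prints "(00/08) p:0 m:0 dnr:0", matching the log above */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}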
00:23:21.375 [2024-06-10 12:27:26.814097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.375 [2024-06-10 12:27:26.814105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3 ...]
00:23:21.375 [2024-06-10 12:27:26.814164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc51a70 is same with the state(5) to be set
00:23:21.375 [2024-06-10 12:27:26.814181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb14c0 (9): Bad file descriptor
00:23:21.375 [2024-06-10 12:27:26.814202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a050 (9): Bad file descriptor
00:23:21.375 [2024-06-10 12:27:26.814222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.375 [2024-06-10 12:27:26.814231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3 ...]
00:23:21.375 [2024-06-10 12:27:26.814283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ea610 is same with the state(5) to be set
00:23:21.375 [2024-06-10 12:27:26.814299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae6f60 (9): Bad file descriptor
00:23:21.375 [2024-06-10 12:27:26.814313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12300 (9): Bad file descriptor
00:23:21.375 [2024-06-10 12:27:26.814336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.375 [2024-06-10 12:27:26.814345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3 ...]
00:23:21.375 [2024-06-10 12:27:26.814398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a920 is same with the state(5) to be set
00:23:21.375 [2024-06-10 12:27:26.814429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.375 [2024-06-10 12:27:26.814438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:63, lba 24704 through 32640, len:128 each ...]
00:23:21.377 [2024-06-10 12:27:26.821667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18ac0 is same with the state(5) to be set
00:23:21.377 [2024-06-10 12:27:26.821719] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc18ac0 was disconnected and freed. reset controller.
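Both qpair dumps above follow the same shape: a CQ transport error invalidates the qpair, every command still outstanding on it (a full queue depth of 64, cid:0 through cid:63) is completed back to the upper layer with ABORTED - SQ DELETION, and only then is a controller reset requested. The following is a schematic sketch of that drain step with hypothetical types and names; it is not SPDK's actual implementation.

/* Schematic only: fail all in-flight requests on a disconnected qpair
 * with sct 0x0 / sc 0x08 (SQ deletion), then hand off to a reset. */
#include <stdint.h>
#include <stdio.h>

#define QDEPTH 64  /* matches the cid:0..cid:63 dumps in the log */

struct req {                 /* hypothetical per-command tracker */
    uint16_t cid;
    int outstanding;
};

static void complete(struct req *r, unsigned sct, unsigned sc)
{
    printf("cid:%u ABORTED - SQ DELETION (%02x/%02x)\n",
           (unsigned)r->cid, sct, sc);
}

static void drain_qpair_on_disconnect(struct req reqs[QDEPTH])
{
    for (int i = 0; i < QDEPTH; i++) {
        if (reqs[i].outstanding) {
            reqs[i].outstanding = 0;
            complete(&reqs[i], 0x0, 0x08); /* generic / SQ deletion */
        }
    }
    printf("qpair drained; reset controller.\n");
}

int main(void)
{
    struct req reqs[QDEPTH];
    for (int i = 0; i < QDEPTH; i++)
        reqs[i] = (struct req){ .cid = (uint16_t)i, .outstanding = 1 };
    drain_qpair_on_disconnect(reqs);
    return 0;
}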
00:23:21.377 [2024-06-10 12:27:26.824506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:21.377 [2024-06-10 12:27:26.824539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:21.377 [2024-06-10 12:27:26.824556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a920 (9): Bad file descriptor
00:23:21.377 [2024-06-10 12:27:26.824607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc50dc0 (9): Bad file descriptor
00:23:21.377 [2024-06-10 12:27:26.824629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc51a70 (9): Bad file descriptor
00:23:21.377 [2024-06-10 12:27:26.824648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ea610 (9): Bad file descriptor
00:23:21.377 [2024-06-10 12:27:26.824660] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:21.377 [2024-06-10 12:27:26.826603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.377 [2024-06-10 12:27:26.826642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0a050 with addr=10.0.0.2, port=4420
00:23:21.377 [2024-06-10 12:27:26.826656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a050 is same with the state(5) to be set
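The "errno = 111" above is Linux ECONNREFUSED: while the target side is being torn down and reset, nothing is accepting connections at 10.0.0.2:4420 (the conventional NVMe-oF/TCP port), so the initiator's reconnect attempt is refused and will be retried. A standalone C probe that reproduces the same errno (illustration only, not part of the test harness):

/* Attempt a TCP connect the way the initiator does during reconnect;
 * with no listener on the port this prints errno 111 on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),      /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* e.g. "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}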
00:23:21.377 [2024-06-10 12:27:26.826731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.377 [2024-06-10 12:27:26.826744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:6 through cid:57 (lba 17152 through 23680), then WRITE cid:0 through cid:3 (lba 24576 through 24960), READ cid:58 through cid:63 (lba 23808 through 24448), and WRITE cid:4 (lba 25088) ...]
00:23:21.379 [2024-06-10 12:27:26.829430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.379 [2024-06-10 12:27:26.829448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:21, lba 16512 through 19072 ...]
00:23:21.380 [2024-06-10 12:27:26.829808] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.829985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.829992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.380 [2024-06-10 12:27:26.830392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.380 [2024-06-10 12:27:26.830402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.830418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.830435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.830451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.830466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.830483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.830490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.381 [2024-06-10 12:27:26.832634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.381 [2024-06-10 12:27:26.832645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.382 [2024-06-10 12:27:26.832936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.832983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.832993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 12:27:26.833079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.382 [2024-06-10 12:27:26.833087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.382 [2024-06-10 
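Editor's note: the "(00/08)" pair that spdk_nvme_print_completion appends to every entry above is the NVMe status code type / status code. SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion": these reads and writes were still outstanding on I/O submission queue 1 when that queue was torn down for the controller resets logged below, and dnr:0 (do-not-retry clear) means the host is allowed to retry them. A minimal, self-contained C sketch of decoding that pair (not SPDK source; sct_name and generic_sc_name are illustrative helpers, not SPDK APIs):

    /* Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
     * e.g. (00/08). Values per the NVMe base specification. */
    #include <stdio.h>

    static const char *sct_name(unsigned sct)
    {
        switch (sct) {
        case 0x0: return "GENERIC COMMAND STATUS";
        case 0x1: return "COMMAND SPECIFIC STATUS";
        case 0x2: return "MEDIA AND DATA INTEGRITY ERROR";
        case 0x3: return "PATH RELATED STATUS";
        default:  return "RESERVED/VENDOR SPECIFIC";
        }
    }

    static const char *generic_sc_name(unsigned sc)
    {
        switch (sc) { /* status codes for SCT 0x0 only */
        case 0x00: return "SUCCESSFUL COMPLETION";
        case 0x07: return "COMMAND ABORT REQUESTED";
        case 0x08: return "COMMAND ABORTED DUE TO SQ DELETION";
        default:   return "OTHER";
        }
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08; /* the (00/08) seen throughout this log */
        printf("(%02x/%02x) => %s / %s\n", sct, sc, sct_name(sct),
               sct == 0 ? generic_sc_name(sc) : "see spec for this SCT");
        return 0;
    }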
00:23:21.382 [2024-06-10 12:27:26.834692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:21.382 [2024-06-10 12:27:26.834719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:21.382 [2024-06-10 12:27:26.834728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:21.382 [2024-06-10 12:27:26.835154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.382 [2024-06-10 12:27:26.835169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0a920 with addr=10.0.0.2, port=4420
00:23:21.382 [2024-06-10 12:27:26.835177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a920 is same with the state(5) to be set
00:23:21.382 [2024-06-10 12:27:26.835189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a050 (9): Bad file descriptor
00:23:21.383 [2024-06-10 12:27:26.835227] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:21.383 [2024-06-10 12:27:26.835256] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:21.383 [2024-06-10 12:27:26.835266] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
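Editor's note: errno 111 on a Linux host is ECONNREFUSED, so the nvme_tcp reconnect to 10.0.0.2:4420 is being actively refused, consistent with the target listener being down mid-reset in this test; the repeated bdev_nvme notice suggests (hedged reading of the message, not a claim about bdev_nvme internals) that further failover requests for the same controller are skipped while one reset/failover is still running. A one-line check of the errno value, assuming a Linux build environment:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is errno 111 -- the value reported by the
         * posix_sock_create connect() failure in the log above. */
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }

Expected output on Linux: "errno 111: Connection refused".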
00:23:21.383 [2024-06-10 12:27:26.835569-26.836560] nvme_qpair.c: *NOTICE*: [condensed: fourth abort batch] READ sqid:1 cid:0-59, lba:24576-32128 in steps of 128, len:128 each; every command completed ABORTED - SQ DELETION (00/08)
00:23:21.384 [2024-06-10 12:27:26.836569] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.384 [2024-06-10 12:27:26.836576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.384 [2024-06-10 12:27:26.836586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.384 [2024-06-10 12:27:26.836593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.384 [2024-06-10 12:27:26.836602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.384 [2024-06-10 12:27:26.836609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.384 [2024-06-10 12:27:26.836618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.384 [2024-06-10 12:27:26.836625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.384 [2024-06-10 12:27:26.837922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.384 [2024-06-10 12:27:26.837935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.384 [2024-06-10 12:27:26.837949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.837956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.837969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.837976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.837989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.837996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.385 [2024-06-10 12:27:26.838727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.385 [2024-06-10 12:27:26.838734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.838984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.838991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.839180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.839187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae0790 is same with the state(5) to be set 00:23:21.386 [2024-06-10 12:27:26.840183] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xae0790 was disconnected and freed. reset controller. 00:23:21.386 [2024-06-10 12:27:26.840192] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
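[annotation] Each READ/completion pair above is SPDK echoing an I/O that was still outstanding when the submission queue was deleted during the reset; the "(00/08)" in spdk_nvme_print_completion is status-code-type 0x00 (generic) and status code 0x08 (Command Aborted due to SQ Deletion). A minimal sketch, assuming SPDK's public headers, of how a completion callback can tell this transient abort apart from a real I/O error (the callback name my_read_done is hypothetical, not part of the test):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical completion callback: commands aborted because their
     * submission queue was deleted (sct 0x0 / sc 0x8, printed as "00/08"
     * in the log above) are transient and can be requeued once the
     * controller reset finishes; anything else is a real I/O error. */
    static void
    my_read_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                printf("transient abort (SQ deleted), will retry\n");
            } else {
                printf("I/O failed: sct=0x%x sc=0x%x\n",
                       cpl->status.sct, cpl->status.sc);
            }
            return;
        }
        printf("read completed\n");
    }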
00:23:21.386 [2024-06-10 12:27:26.840224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 
12:27:26.840427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.386 [2024-06-10 12:27:26.840446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.386 [2024-06-10 12:27:26.840454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.840982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.840989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.387 [2024-06-10 12:27:26.841184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.387 [2024-06-10 12:27:26.841191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.841483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.388 [2024-06-10 12:27:26.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.388 [2024-06-10 12:27:26.842443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae1cb0 is same with the state(5) to be set 00:23:21.388 [2024-06-10 12:27:26.842490] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xae1cb0 was disconnected and freed. reset controller. 00:23:21.388 [2024-06-10 12:27:26.842497] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
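[annotation] The "recv state of tqpair=... is same with the state(5) to be set" error that precedes each qpair teardown comes from an idempotent state transition: while the TCP qpair is being torn down, its PDU receive state machine is forced into the error state, and the setter complains when asked to set the state it is already in. A minimal sketch of that guard pattern; the enum names and values here are illustrative only, not SPDK's actual definitions:

    #include <stdio.h>

    /* Illustrative recv-state machine; the real enum lives inside SPDK's
     * nvme_tcp code, where state 5 in this log is its error state. */
    enum recv_state { AWAIT_CH, AWAIT_PSH, AWAIT_PAYLOAD, READY, QUIESCING, ERROR_STATE };

    struct tqpair { enum recv_state recv_state; };

    static void
    set_recv_state(struct tqpair *tq, enum recv_state st)
    {
        if (tq->recv_state == st) {
            /* Mirrors the log line: setting a state that is already current. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the "
                    "state(%d) to be set\n", (void *)tq, st);
            return;
        }
        tq->recv_state = st;
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = ERROR_STATE };
        set_recv_state(&tq, ERROR_STATE);   /* triggers the message */
        return 0;
    }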
00:23:21.388 [2024-06-10 12:27:26.842542] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.388 [2024-06-10 12:27:26.842568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:21.388 [2024-06-10 12:27:26.842596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.388 [2024-06-10 12:27:26.843011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.388 [2024-06-10 12:27:26.843025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae6f60 with addr=10.0.0.2, port=4420 00:23:21.388 [2024-06-10 12:27:26.843032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6f60 is same with the state(5) to be set 00:23:21.388 [2024-06-10 12:27:26.843448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.388 [2024-06-10 12:27:26.843486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb14c0 with addr=10.0.0.2, port=4420 00:23:21.388 [2024-06-10 12:27:26.843497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb14c0 is same with the state(5) to be set 00:23:21.388 [2024-06-10 12:27:26.843890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.388 [2024-06-10 12:27:26.843901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0ade0 with addr=10.0.0.2, port=4420 00:23:21.388 [2024-06-10 12:27:26.843908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ade0 is same with the state(5) to be set 00:23:21.388 [2024-06-10 12:27:26.843921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a920 (9): Bad file descriptor 00:23:21.388 [2024-06-10 12:27:26.843931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.388 [2024-06-10 12:27:26.843939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.388 [2024-06-10 12:27:26.843948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.388 [2024-06-10 12:27:26.843970] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
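[annotation] The posix_sock_create failures above with errno = 111 are plain ECONNREFUSED: while the target side is resetting, nothing is listening on 10.0.0.2:4420, so every reconnect attempt for cnode3/cnode4/cnode10 is refused and the host retries. A standalone reproduction using plain POSIX sockets (not SPDK's sock layer), with the same address and the standard NVMe/TCP port:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* With no listener on 10.0.0.2:4420, connect() fails with
     * errno 111 (ECONNREFUSED), matching the log entries above. */
    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP IANA port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }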
00:23:21.388 [2024-06-10 12:27:26.844616 - 12:27:26.845680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, lba stepping by 128 per cid; each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:23:21.390 [2024-06-10 12:27:26.845688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae3050 is same with the state(5) to be set
00:23:21.390 [2024-06-10 12:27:26.848319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:21.390 [2024-06-10 12:27:26.848340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:21.390 [2024-06-10 12:27:26.848352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:21.390 task offset: 29824 on job bdev=Nvme4n1 fails
00:23:21.390
00:23:21.390 Latency(us)
00:23:21.390 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended with error after the runtime shown)
00:23:21.390 Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:23:21.390 Nvme1n1            :       0.95   201.60   12.60   67.20  0.00  235368.32   14636.37  255153.49
00:23:21.390 Nvme2n1            :       0.96   139.19    8.70   66.98  0.00  300684.10   16274.77  244667.73
00:23:21.390 Nvme3n1            :       0.96   199.10   12.44   66.37  0.00  228598.51   10267.31  260396.37
00:23:21.390 Nvme4n1            :       0.95   202.17   12.64   67.39  0.00  220107.41   11414.19  251658.24
00:23:21.390 Nvme5n1            :       0.96   133.58    8.35   66.79  0.00  289997.65   20097.71  258648.75
00:23:21.390 Nvme6n1            :       0.95   201.92   12.62   67.31  0.00  210757.55   19005.44  248162.99
00:23:21.390 Nvme7n1            :       0.97   198.64   12.41   66.21  0.00  209699.41   20316.16  248162.99
00:23:21.390 Nvme8n1            :       0.97   198.17   12.39   66.06  0.00  205421.87   18786.99  249910.61
00:23:21.390 Nvme9n1            :       0.97   131.50    8.22   65.75  0.00  269181.72   18786.99  281367.89
00:23:21.390 Nvme10n1           :       0.96   133.21    8.33   66.60  0.00  258596.12   17913.17  256901.12
00:23:21.390 ===================================================================================================================
00:23:21.390 Total              :             1739.08  108.69  666.65  0.00  238889.41   10267.31  281367.89
00:23:21.390 [2024-06-10 12:27:26.874992] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.390 [2024-06-10 12:27:26.875027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:21.390 [2024-06-10 12:27:26.875438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.390 [2024-06-10 12:27:26.875455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc69e60 with addr=10.0.0.2, port=4420
00:23:21.390 [2024-06-10 12:27:26.875465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc69e60 is same with the state(5) to be set
00:23:21.390 [2024-06-10 12:27:26.875686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.390 [2024-06-10 12:27:26.875695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb12300 with addr=10.0.0.2, port=4420
00:23:21.390 [2024-06-10 12:27:26.875702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12300 is same with the state(5) to be set
00:23:21.390 [2024-06-10 12:27:26.875714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae6f60 (9): Bad file descriptor
00:23:21.390 [2024-06-10 12:27:26.875725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb14c0 (9): Bad file descriptor
00:23:21.390 [2024-06-10 12:27:26.875734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0ade0 (9): Bad file descriptor
00:23:21.390 [2024-06-10 12:27:26.875743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:21.390 [2024-06-10 12:27:26.875750]
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.390 [2024-06-10 12:27:26.875758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:21.390 [2024-06-10 12:27:26.875787] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.390 [2024-06-10 12:27:26.875800] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.390 [2024-06-10 12:27:26.875810] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.390 [2024-06-10 12:27:26.876685] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.390 [2024-06-10 12:27:26.877038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.390 [2024-06-10 12:27:26.877049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ea610 with addr=10.0.0.2, port=4420 00:23:21.390 [2024-06-10 12:27:26.877057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ea610 is same with the state(5) to be set 00:23:21.390 [2024-06-10 12:27:26.877236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.390 [2024-06-10 12:27:26.877246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc51a70 with addr=10.0.0.2, port=4420 00:23:21.390 [2024-06-10 12:27:26.877253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc51a70 is same with the state(5) to be set 00:23:21.391 [2024-06-10 12:27:26.877478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.391 [2024-06-10 12:27:26.877487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc50dc0 with addr=10.0.0.2, port=4420 00:23:21.391 [2024-06-10 12:27:26.877494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc50dc0 is same with the state(5) to be set 00:23:21.391 [2024-06-10 12:27:26.877503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc69e60 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.877513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12300 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.877521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.877527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.877534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.391 [2024-06-10 12:27:26.877547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.877554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.877560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
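As a quick consistency check on the bdevperf summary above: with the 65536-byte I/O size shown in the job settings, MiB/s should equal IOPS * 65536 / 1048576. A one-liner for the Nvme1n1 row (a hypothetical check, not part of the harness):

    awk 'BEGIN { printf "%.2f MiB/s\n", 201.60 * 65536 / 1048576 }'   # prints 12.60, matching the table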
00:23:21.391 [2024-06-10 12:27:26.877570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.877576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.877583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.391 [2024-06-10 12:27:26.877607] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877618] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877628] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877638] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877647] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877657] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.391 [2024-06-10 12:27:26.877969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.391 [2024-06-10 12:27:26.877991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.877998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ea610 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.878033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc51a70 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.878042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc50dc0 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.878050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:21.391 [2024-06-10 12:27:26.878150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.391 [2024-06-10 12:27:26.878156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.391 [2024-06-10 12:27:26.878521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0a050 with addr=10.0.0.2, port=4420 00:23:21.391 [2024-06-10 12:27:26.878528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a050 is same with the state(5) to be set 00:23:21.391 [2024-06-10 12:27:26.878535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.391 [2024-06-10 12:27:26.878848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0a920 with addr=10.0.0.2, port=4420 00:23:21.391 [2024-06-10 12:27:26.878855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a920 is same with the state(5) to be set 00:23:21.391 [2024-06-10 12:27:26.878868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a050 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.878896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a920 (9): Bad file descriptor 00:23:21.391 [2024-06-10 12:27:26.878905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
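The "(9)" in the "Failed to flush tqpair" records is likewise an errno, EBADF: the socket behind each qpair has already been closed by the teardown, so the flush sees a bad file descriptor. One way to confirm the errno names used in these records, on any host with python3 (a sketch, not from the harness):

    python3 -c 'import errno, os; print(errno.errorcode[9], "-", os.strerror(9))'     # EBADF - Bad file descriptor
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))' # ECONNREFUSED - Connection refused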
00:23:21.391 [2024-06-10 12:27:26.878947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.391 [2024-06-10 12:27:26.878954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:21.391 [2024-06-10 12:27:26.878960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.391 [2024-06-10 12:27:26.878967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:21.391 [2024-06-10 12:27:26.878994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.652 12:27:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:21.652 12:27:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 731937 00:23:22.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (731937) - No such process 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.593 rmmod nvme_tcp 00:23:22.593 rmmod nvme_fabrics 00:23:22.593 rmmod nvme_keyring 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
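The teardown trace above runs "modprobe -v -r nvme-tcp" inside "for i in {1..20}" with "set +e", since the module can stay busy for a moment while qpairs drain; the "rmmod nvme_tcp", "rmmod nvme_fabrics" and "rmmod nvme_keyring" lines are modprobe's verbose output. A minimal standalone sketch of that retry pattern, assumed from the trace (the real loop lives in nvmf/common.sh, and the back-off sleep is an added assumption):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # succeeds once no NVMe/TCP users remain
        sleep 0.5                          # back-off between attempts (assumption)
    done
    modprobe -v -r nvme-fabrics
    set -e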
00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.593 12:27:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.137 00:23:25.137 real 0m7.788s 00:23:25.137 user 0m19.027s 00:23:25.137 sys 0m1.186s 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 ************************************ 00:23:25.137 END TEST nvmf_shutdown_tc3 00:23:25.137 ************************************ 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:25.137 00:23:25.137 real 0m32.815s 00:23:25.137 user 1m15.812s 00:23:25.137 sys 0m9.523s 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:25.137 12:27:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 ************************************ 00:23:25.137 END TEST nvmf_shutdown 00:23:25.137 ************************************ 00:23:25.137 12:27:30 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 12:27:30 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 12:27:30 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:25.137 12:27:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:25.137 12:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 ************************************ 00:23:25.137 START TEST nvmf_multicontroller 00:23:25.137 ************************************ 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.137 * Looking for test storage... 
00:23:25.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.137 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three toolchain bin dirs repeated several more times, as in the export.sh@2 value above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:27:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same three toolchain bin dirs repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 12:27:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same three toolchain bin dirs repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:25.138 12:27:30
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.138 12:27:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.279 12:27:38 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.279 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:33.280 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:33.280 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:33.280 Found net devices under 0000:31:00.0: cvl_0_0 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:33.280 Found net devices under 0000:31:00.1: cvl_0_1 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.280 12:27:38 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:23:33.280 00:23:33.280 --- 10.0.0.2 ping statistics --- 00:23:33.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.280 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:23:33.280 00:23:33.280 --- 10.0.0.1 ping statistics --- 00:23:33.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.280 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=737474 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 737474 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 737474 ']' 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:33.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:33.280 12:27:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.281 12:27:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.281 [2024-06-10 12:27:38.861837] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:33.281 [2024-06-10 12:27:38.861902] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.541 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.541 [2024-06-10 12:27:38.957074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.541 [2024-06-10 12:27:39.051582] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.541 [2024-06-10 12:27:39.051642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.541 [2024-06-10 12:27:39.051650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.541 [2024-06-10 12:27:39.051657] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.541 [2024-06-10 12:27:39.051663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.541 [2024-06-10 12:27:39.051794] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.541 [2024-06-10 12:27:39.051958] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.541 [2024-06-10 12:27:39.051958] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.163 [2024-06-10 12:27:39.678758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.163 12:27:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.163 Malloc0 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.163 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.164 [2024-06-10 12:27:39.747463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.164 [2024-06-10 12:27:39.759364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.164 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 Malloc1 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=737697 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 737697 /var/tmp/bdevperf.sock 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 737697 ']' 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
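Note: the rpc_cmd traces that follow exercise bdev_nvme_attach_controller's duplicate-name rules against the bdevperf instance just started on /var/tmp/bdevperf.sock. A minimal sketch of the same checks, assuming SPDK's scripts/rpc.py (the rpc.py path is an assumption; every flag, address, and NQN below is taken from the trace):

  rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'   # rpc.py location assumed

  # First attach creates controller NVMe0; bdev NVMe0n1 appears.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Reusing the name with a different hostnqn, a different subsystem, or
  # -x disable must fail with JSON-RPC error -114, as the responses below show.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
      && echo 'unexpected: duplicate attach succeeded'

  # The one permitted duplicate: a second path to the same subsystem
  # (port 4421), which becomes a failover path for NVMe0.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1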
00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:34.425 12:27:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 NVMe0n1 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.370 1 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 request: 00:23:35.370 { 00:23:35.370 "name": "NVMe0", 00:23:35.370 "trtype": "tcp", 00:23:35.370 "traddr": "10.0.0.2", 00:23:35.370 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:35.370 "hostaddr": "10.0.0.2", 00:23:35.370 "hostsvcid": "60000", 00:23:35.370 "adrfam": "ipv4", 00:23:35.370 "trsvcid": "4420", 00:23:35.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.370 "method": 
"bdev_nvme_attach_controller", 00:23:35.370 "req_id": 1 00:23:35.370 } 00:23:35.370 Got JSON-RPC error response 00:23:35.370 response: 00:23:35.370 { 00:23:35.370 "code": -114, 00:23:35.370 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:35.370 } 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 request: 00:23:35.370 { 00:23:35.370 "name": "NVMe0", 00:23:35.370 "trtype": "tcp", 00:23:35.370 "traddr": "10.0.0.2", 00:23:35.370 "hostaddr": "10.0.0.2", 00:23:35.370 "hostsvcid": "60000", 00:23:35.370 "adrfam": "ipv4", 00:23:35.370 "trsvcid": "4420", 00:23:35.370 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.370 "method": "bdev_nvme_attach_controller", 00:23:35.370 "req_id": 1 00:23:35.370 } 00:23:35.370 Got JSON-RPC error response 00:23:35.370 response: 00:23:35.370 { 00:23:35.370 "code": -114, 00:23:35.370 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:35.370 } 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.370 request: 00:23:35.370 { 00:23:35.370 "name": "NVMe0", 00:23:35.370 "trtype": "tcp", 00:23:35.370 "traddr": "10.0.0.2", 00:23:35.370 "hostaddr": "10.0.0.2", 00:23:35.370 "hostsvcid": "60000", 00:23:35.370 "adrfam": "ipv4", 00:23:35.370 "trsvcid": "4420", 00:23:35.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.370 "multipath": "disable", 00:23:35.370 "method": "bdev_nvme_attach_controller", 00:23:35.370 "req_id": 1 00:23:35.370 } 00:23:35.370 Got JSON-RPC error response 00:23:35.370 response: 00:23:35.370 { 00:23:35.370 "code": -114, 00:23:35.370 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:35.370 } 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:35.370 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.371 request: 00:23:35.371 { 00:23:35.371 "name": "NVMe0", 00:23:35.371 "trtype": "tcp", 00:23:35.371 "traddr": "10.0.0.2", 00:23:35.371 "hostaddr": "10.0.0.2", 00:23:35.371 "hostsvcid": "60000", 00:23:35.371 "adrfam": "ipv4", 00:23:35.371 "trsvcid": "4420", 00:23:35.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.371 "multipath": "failover", 00:23:35.371 "method": "bdev_nvme_attach_controller", 00:23:35.371 "req_id": 1 00:23:35.371 } 00:23:35.371 Got JSON-RPC error response 00:23:35.371 response: 00:23:35.371 { 00:23:35.371 "code": -114, 00:23:35.371 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:35.371 } 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.371 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.633 00:23:35.633 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.633 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.634 12:27:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.634 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:35.634 12:27:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.022 0 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 737697 ']' 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 737697' 00:23:37.022 killing process with pid 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 737697 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:37.022 12:27:42 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:23:37.022 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:23:37.022 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:37.022 [2024-06-10 12:27:39.878332] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:37.022 [2024-06-10 12:27:39.878388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737697 ] 00:23:37.022 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.022 [2024-06-10 12:27:39.934413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.022 [2024-06-10 12:27:39.988274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.022 [2024-06-10 12:27:41.182555] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 83197e27-ed56-45cb-ae39-3274679e0ac7 already exists 00:23:37.022 [2024-06-10 12:27:41.182586] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:83197e27-ed56-45cb-ae39-3274679e0ac7 alias for bdev NVMe1n1 00:23:37.022 [2024-06-10 12:27:41.182595] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:37.022 Running I/O for 1 seconds... 
00:23:37.022 00:23:37.022 Latency(us) 00:23:37.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.022 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:37.022 NVMe0n1 : 1.00 23101.65 90.24 0.00 0.00 5528.85 2116.27 11741.87 00:23:37.022 =================================================================================================================== 00:23:37.022 Total : 23101.65 90.24 0.00 0.00 5528.85 2116.27 11741.87 00:23:37.022 Received shutdown signal, test time was about 1.000000 seconds 00:23:37.022 00:23:37.022 Latency(us) 00:23:37.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.022 =================================================================================================================== 00:23:37.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.022 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.023 rmmod nvme_tcp 00:23:37.023 rmmod nvme_fabrics 00:23:37.023 rmmod nvme_keyring 00:23:37.023 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 737474 ']' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 737474 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 737474 ']' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 737474 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 737474 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 737474' 00:23:37.284 killing process with pid 737474 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 737474 00:23:37.284 12:27:42 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 737474 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.284 12:27:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.832 12:27:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.832 00:23:39.832 real 0m14.504s 00:23:39.832 user 0m16.625s 00:23:39.832 sys 0m6.883s 00:23:39.832 12:27:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:39.832 12:27:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.832 ************************************ 00:23:39.832 END TEST nvmf_multicontroller 00:23:39.832 ************************************ 00:23:39.832 12:27:44 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.832 12:27:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:39.832 12:27:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:39.832 12:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.832 ************************************ 00:23:39.832 START TEST nvmf_aer 00:23:39.832 ************************************ 00:23:39.832 12:27:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.832 * Looking for test storage... 
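Note: run_test wraps each host case in a standalone script, so the aer run that starts here can be reproduced outside Jenkins. A sketch, assuming an SPDK checkout with the same layout (the workspace path is specific to this CI node; substitute your own tree):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # CI checkout path
  sudo ./test/nvmf/host/aer.sh --transport=tcp           # same argv run_test passes above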
00:23:39.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.832 12:27:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:47.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.976 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:23:47.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:47.977 Found net devices under 0000:31:00.0: cvl_0_0 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:47.977 Found net devices under 0000:31:00.1: cvl_0_1 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.977 
12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:23:47.977 00:23:47.977 --- 10.0.0.2 ping statistics --- 00:23:47.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.977 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:23:47.977 00:23:47.977 --- 10.0.0.1 ping statistics --- 00:23:47.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.977 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.977 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=743045 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 743045 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 743045 ']' 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:47.978 12:27:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.978 [2024-06-10 12:27:53.532245] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:23:47.978 [2024-06-10 12:27:53.532334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.978 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.239 [2024-06-10 12:27:53.611956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.239 [2024-06-10 12:27:53.686983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.239 [2024-06-10 12:27:53.687021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:48.239 [2024-06-10 12:27:53.687029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.239 [2024-06-10 12:27:53.687036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.239 [2024-06-10 12:27:53.687041] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.239 [2024-06-10 12:27:53.687182] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.239 [2024-06-10 12:27:53.687306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.239 [2024-06-10 12:27:53.687359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.239 [2024-06-10 12:27:53.687359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.812 [2024-06-10 12:27:54.348734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.812 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.813 Malloc0 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.813 [2024-06-10 12:27:54.407971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.813 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.074 [ 00:23:49.074 { 00:23:49.074 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.074 "subtype": "Discovery", 00:23:49.074 "listen_addresses": [], 00:23:49.075 "allow_any_host": true, 00:23:49.075 "hosts": [] 00:23:49.075 }, 00:23:49.075 { 00:23:49.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.075 "subtype": "NVMe", 00:23:49.075 "listen_addresses": [ 00:23:49.075 { 00:23:49.075 "trtype": "TCP", 00:23:49.075 "adrfam": "IPv4", 00:23:49.075 "traddr": "10.0.0.2", 00:23:49.075 "trsvcid": "4420" 00:23:49.075 } 00:23:49.075 ], 00:23:49.075 "allow_any_host": true, 00:23:49.075 "hosts": [], 00:23:49.075 "serial_number": "SPDK00000000000001", 00:23:49.075 "model_number": "SPDK bdev Controller", 00:23:49.075 "max_namespaces": 2, 00:23:49.075 "min_cntlid": 1, 00:23:49.075 "max_cntlid": 65519, 00:23:49.075 "namespaces": [ 00:23:49.075 { 00:23:49.075 "nsid": 1, 00:23:49.075 "bdev_name": "Malloc0", 00:23:49.075 "name": "Malloc0", 00:23:49.075 "nguid": "075CDEAAC3C64763BDA36AA1882AB37C", 00:23:49.075 "uuid": "075cdeaa-c3c6-4763-bda3-6aa1882ab37c" 00:23:49.075 } 00:23:49.075 ] 00:23:49.075 } 00:23:49.075 ] 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=743085 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:49.075 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.075 Malloc1 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.075 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.337 Asynchronous Event Request test 00:23:49.337 Attaching to 10.0.0.2 00:23:49.337 Attached to 10.0.0.2 00:23:49.337 Registering asynchronous event callbacks... 00:23:49.337 Starting namespace attribute notice tests for all controllers... 00:23:49.337 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:49.337 aer_cb - Changed Namespace 00:23:49.337 Cleaning up... 00:23:49.337 [ 00:23:49.337 { 00:23:49.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.337 "subtype": "Discovery", 00:23:49.337 "listen_addresses": [], 00:23:49.337 "allow_any_host": true, 00:23:49.337 "hosts": [] 00:23:49.337 }, 00:23:49.337 { 00:23:49.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.337 "subtype": "NVMe", 00:23:49.337 "listen_addresses": [ 00:23:49.337 { 00:23:49.337 "trtype": "TCP", 00:23:49.337 "adrfam": "IPv4", 00:23:49.337 "traddr": "10.0.0.2", 00:23:49.337 "trsvcid": "4420" 00:23:49.337 } 00:23:49.337 ], 00:23:49.337 "allow_any_host": true, 00:23:49.337 "hosts": [], 00:23:49.337 "serial_number": "SPDK00000000000001", 00:23:49.337 "model_number": "SPDK bdev Controller", 00:23:49.337 "max_namespaces": 2, 00:23:49.337 "min_cntlid": 1, 00:23:49.337 "max_cntlid": 65519, 00:23:49.337 "namespaces": [ 00:23:49.337 { 00:23:49.337 "nsid": 1, 00:23:49.337 "bdev_name": "Malloc0", 00:23:49.337 "name": "Malloc0", 00:23:49.337 "nguid": "075CDEAAC3C64763BDA36AA1882AB37C", 00:23:49.337 "uuid": "075cdeaa-c3c6-4763-bda3-6aa1882ab37c" 00:23:49.337 }, 00:23:49.337 { 00:23:49.337 "nsid": 2, 00:23:49.337 "bdev_name": "Malloc1", 00:23:49.337 "name": "Malloc1", 00:23:49.337 "nguid": "B41588B1737145408DA292912077404E", 00:23:49.337 "uuid": "b41588b1-7371-4540-8da2-92912077404e" 00:23:49.337 } 00:23:49.337 ] 00:23:49.337 } 00:23:49.337 ] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 743085 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.337 rmmod nvme_tcp 00:23:49.337 rmmod nvme_fabrics 00:23:49.337 rmmod nvme_keyring 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 743045 ']' 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 743045 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 743045 ']' 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 743045 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 743045 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 743045' 00:23:49.337 killing process with pid 743045 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 743045 00:23:49.337 12:27:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 743045 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:23:49.599 12:27:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.515 12:27:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.515 00:23:51.515 real 0m12.118s 00:23:51.515 user 0m7.692s 00:23:51.515 sys 0m6.654s 00:23:51.515 12:27:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:51.515 12:27:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:51.515 ************************************ 00:23:51.515 END TEST nvmf_aer 00:23:51.515 ************************************ 00:23:51.777 12:27:57 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:51.777 12:27:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:51.777 12:27:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:51.777 12:27:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.777 ************************************ 00:23:51.777 START TEST nvmf_async_init 00:23:51.777 ************************************ 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:51.777 * Looking for test storage... 00:23:51.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.777 12:27:57 
nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:51.777 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=df81383de42a473ea4856a1b038745bc 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.778 12:27:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.923 12:28:05 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:59.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:59.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.923 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:59.924 Found net devices under 0000:31:00.0: cvl_0_0 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:59.924 Found net devices under 0000:31:00.1: cvl_0_1 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:23:59.924 00:23:59.924 --- 10.0.0.2 ping statistics --- 00:23:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.924 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:23:59.924 00:23:59.924 --- 10.0.0.1 ping statistics --- 00:23:59.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.924 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.924 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=747929 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 747929 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 747929 ']' 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:00.185 12:28:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.185 [2024-06-10 12:28:05.622096] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:24:00.185 [2024-06-10 12:28:05.622166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.185 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.185 [2024-06-10 12:28:05.699997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.185 [2024-06-10 12:28:05.773860] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.185 [2024-06-10 12:28:05.773897] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.185 [2024-06-10 12:28:05.773904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.185 [2024-06-10 12:28:05.773911] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.185 [2024-06-10 12:28:05.773916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.185 [2024-06-10 12:28:05.773940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 [2024-06-10 12:28:06.428839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 null0 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g df81383de42a473ea4856a1b038745bc 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 [2024-06-10 12:28:06.485082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.127 nvme0n1 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.127 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.425 [ 00:24:01.425 { 00:24:01.425 "name": "nvme0n1", 00:24:01.425 "aliases": [ 00:24:01.425 "df81383d-e42a-473e-a485-6a1b038745bc" 00:24:01.425 ], 00:24:01.425 "product_name": "NVMe disk", 00:24:01.425 "block_size": 512, 00:24:01.425 "num_blocks": 2097152, 00:24:01.425 "uuid": "df81383d-e42a-473e-a485-6a1b038745bc", 00:24:01.425 "assigned_rate_limits": { 00:24:01.425 "rw_ios_per_sec": 0, 00:24:01.425 "rw_mbytes_per_sec": 0, 00:24:01.425 "r_mbytes_per_sec": 0, 00:24:01.425 "w_mbytes_per_sec": 0 00:24:01.425 }, 00:24:01.425 "claimed": false, 00:24:01.425 "zoned": false, 00:24:01.425 "supported_io_types": { 00:24:01.425 "read": true, 00:24:01.425 "write": true, 00:24:01.425 "unmap": false, 00:24:01.425 "write_zeroes": true, 00:24:01.425 "flush": true, 00:24:01.425 "reset": true, 00:24:01.425 "compare": true, 00:24:01.425 "compare_and_write": true, 00:24:01.425 "abort": true, 00:24:01.425 "nvme_admin": true, 00:24:01.425 "nvme_io": true 00:24:01.425 }, 00:24:01.425 "memory_domains": [ 00:24:01.425 { 00:24:01.425 "dma_device_id": "system", 00:24:01.425 "dma_device_type": 1 00:24:01.425 } 00:24:01.425 ], 00:24:01.425 "driver_specific": { 00:24:01.425 "nvme": [ 00:24:01.425 { 00:24:01.425 "trid": { 00:24:01.425 "trtype": "TCP", 00:24:01.425 "adrfam": "IPv4", 00:24:01.425 "traddr": "10.0.0.2", 00:24:01.425 "trsvcid": "4420", 00:24:01.425 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.425 }, 00:24:01.425 "ctrlr_data": { 00:24:01.425 "cntlid": 1, 00:24:01.425 "vendor_id": "0x8086", 00:24:01.425 "model_number": "SPDK bdev Controller", 00:24:01.425 "serial_number": "00000000000000000000", 00:24:01.425 "firmware_revision": "24.09", 00:24:01.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.425 "oacs": { 00:24:01.425 "security": 0, 00:24:01.425 "format": 0, 00:24:01.425 "firmware": 0, 00:24:01.425 "ns_manage": 0 00:24:01.426 }, 00:24:01.426 "multi_ctrlr": true, 00:24:01.426 "ana_reporting": false 00:24:01.426 }, 00:24:01.426 "vs": { 00:24:01.426 "nvme_version": "1.3" 00:24:01.426 }, 00:24:01.426 "ns_data": { 00:24:01.426 "id": 1, 00:24:01.426 "can_share": true 00:24:01.426 } 00:24:01.426 } 00:24:01.426 ], 00:24:01.426 "mp_policy": "active_passive" 00:24:01.426 } 00:24:01.426 } 00:24:01.426 ] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 [2024-06-10 12:28:06.749671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:01.426 [2024-06-10 12:28:06.749735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e91400 (9): Bad file descriptor 00:24:01.426 [2024-06-10 12:28:06.881288] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 [ 00:24:01.426 { 00:24:01.426 "name": "nvme0n1", 00:24:01.426 "aliases": [ 00:24:01.426 "df81383d-e42a-473e-a485-6a1b038745bc" 00:24:01.426 ], 00:24:01.426 "product_name": "NVMe disk", 00:24:01.426 "block_size": 512, 00:24:01.426 "num_blocks": 2097152, 00:24:01.426 "uuid": "df81383d-e42a-473e-a485-6a1b038745bc", 00:24:01.426 "assigned_rate_limits": { 00:24:01.426 "rw_ios_per_sec": 0, 00:24:01.426 "rw_mbytes_per_sec": 0, 00:24:01.426 "r_mbytes_per_sec": 0, 00:24:01.426 "w_mbytes_per_sec": 0 00:24:01.426 }, 00:24:01.426 "claimed": false, 00:24:01.426 "zoned": false, 00:24:01.426 "supported_io_types": { 00:24:01.426 "read": true, 00:24:01.426 "write": true, 00:24:01.426 "unmap": false, 00:24:01.426 "write_zeroes": true, 00:24:01.426 "flush": true, 00:24:01.426 "reset": true, 00:24:01.426 "compare": true, 00:24:01.426 "compare_and_write": true, 00:24:01.426 "abort": true, 00:24:01.426 "nvme_admin": true, 00:24:01.426 "nvme_io": true 00:24:01.426 }, 00:24:01.426 "memory_domains": [ 00:24:01.426 { 00:24:01.426 "dma_device_id": "system", 00:24:01.426 "dma_device_type": 1 00:24:01.426 } 00:24:01.426 ], 00:24:01.426 "driver_specific": { 00:24:01.426 "nvme": [ 00:24:01.426 { 00:24:01.426 "trid": { 00:24:01.426 "trtype": "TCP", 00:24:01.426 "adrfam": "IPv4", 00:24:01.426 "traddr": "10.0.0.2", 00:24:01.426 "trsvcid": "4420", 00:24:01.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.426 }, 00:24:01.426 "ctrlr_data": { 00:24:01.426 "cntlid": 2, 00:24:01.426 "vendor_id": "0x8086", 00:24:01.426 "model_number": "SPDK bdev Controller", 00:24:01.426 "serial_number": "00000000000000000000", 00:24:01.426 "firmware_revision": "24.09", 00:24:01.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.426 "oacs": { 00:24:01.426 "security": 0, 00:24:01.426 "format": 0, 00:24:01.426 "firmware": 0, 00:24:01.426 "ns_manage": 0 00:24:01.426 }, 00:24:01.426 "multi_ctrlr": true, 00:24:01.426 "ana_reporting": false 00:24:01.426 }, 00:24:01.426 "vs": { 00:24:01.426 "nvme_version": "1.3" 00:24:01.426 }, 00:24:01.426 "ns_data": { 00:24:01.426 "id": 1, 00:24:01.426 "can_share": true 00:24:01.426 } 00:24:01.426 } 00:24:01.426 ], 00:24:01.426 "mp_policy": "active_passive" 00:24:01.426 } 00:24:01.426 } 00:24:01.426 ] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2sncf3KhNm 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2sncf3KhNm 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 [2024-06-10 12:28:06.950309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.426 [2024-06-10 12:28:06.950430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2sncf3KhNm 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 [2024-06-10 12:28:06.962330] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2sncf3KhNm 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.426 12:28:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 [2024-06-10 12:28:06.974364] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.426 [2024-06-10 12:28:06.974402] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:01.690 nvme0n1 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.690 [ 00:24:01.690 { 00:24:01.690 "name": "nvme0n1", 00:24:01.690 "aliases": [ 00:24:01.690 "df81383d-e42a-473e-a485-6a1b038745bc" 00:24:01.690 ], 00:24:01.690 
"product_name": "NVMe disk", 00:24:01.690 "block_size": 512, 00:24:01.690 "num_blocks": 2097152, 00:24:01.690 "uuid": "df81383d-e42a-473e-a485-6a1b038745bc", 00:24:01.690 "assigned_rate_limits": { 00:24:01.690 "rw_ios_per_sec": 0, 00:24:01.690 "rw_mbytes_per_sec": 0, 00:24:01.690 "r_mbytes_per_sec": 0, 00:24:01.690 "w_mbytes_per_sec": 0 00:24:01.690 }, 00:24:01.690 "claimed": false, 00:24:01.690 "zoned": false, 00:24:01.690 "supported_io_types": { 00:24:01.690 "read": true, 00:24:01.690 "write": true, 00:24:01.690 "unmap": false, 00:24:01.690 "write_zeroes": true, 00:24:01.690 "flush": true, 00:24:01.690 "reset": true, 00:24:01.690 "compare": true, 00:24:01.690 "compare_and_write": true, 00:24:01.690 "abort": true, 00:24:01.690 "nvme_admin": true, 00:24:01.690 "nvme_io": true 00:24:01.690 }, 00:24:01.690 "memory_domains": [ 00:24:01.690 { 00:24:01.690 "dma_device_id": "system", 00:24:01.690 "dma_device_type": 1 00:24:01.690 } 00:24:01.690 ], 00:24:01.690 "driver_specific": { 00:24:01.690 "nvme": [ 00:24:01.690 { 00:24:01.690 "trid": { 00:24:01.690 "trtype": "TCP", 00:24:01.690 "adrfam": "IPv4", 00:24:01.690 "traddr": "10.0.0.2", 00:24:01.690 "trsvcid": "4421", 00:24:01.690 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.690 }, 00:24:01.690 "ctrlr_data": { 00:24:01.690 "cntlid": 3, 00:24:01.690 "vendor_id": "0x8086", 00:24:01.690 "model_number": "SPDK bdev Controller", 00:24:01.690 "serial_number": "00000000000000000000", 00:24:01.690 "firmware_revision": "24.09", 00:24:01.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.690 "oacs": { 00:24:01.690 "security": 0, 00:24:01.690 "format": 0, 00:24:01.690 "firmware": 0, 00:24:01.690 "ns_manage": 0 00:24:01.690 }, 00:24:01.690 "multi_ctrlr": true, 00:24:01.690 "ana_reporting": false 00:24:01.690 }, 00:24:01.690 "vs": { 00:24:01.690 "nvme_version": "1.3" 00:24:01.690 }, 00:24:01.690 "ns_data": { 00:24:01.690 "id": 1, 00:24:01.690 "can_share": true 00:24:01.690 } 00:24:01.690 } 00:24:01.690 ], 00:24:01.690 "mp_policy": "active_passive" 00:24:01.690 } 00:24:01.690 } 00:24:01.690 ] 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2sncf3KhNm 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.690 rmmod nvme_tcp 00:24:01.690 rmmod nvme_fabrics 00:24:01.690 rmmod nvme_keyring 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 747929 ']' 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 747929 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 747929 ']' 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 747929 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 747929 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 747929' 00:24:01.690 killing process with pid 747929 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 747929 00:24:01.690 [2024-06-10 12:28:07.216870] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:01.690 [2024-06-10 12:28:07.216897] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:01.690 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 747929 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.951 12:28:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.952 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.952 12:28:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.863 12:28:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.863 00:24:03.863 real 0m12.250s 00:24:03.863 user 0m4.236s 00:24:03.863 sys 0m6.474s 00:24:03.863 12:28:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:03.863 12:28:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.863 ************************************ 00:24:03.863 END TEST nvmf_async_init 00:24:03.863 ************************************ 00:24:03.863 12:28:09 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:03.863 12:28:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:03.863 12:28:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:03.863 12:28:09 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:24:04.124 ************************************ 00:24:04.124 START TEST dma 00:24:04.124 ************************************ 00:24:04.124 12:28:09 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:04.124 * Looking for test storage... 00:24:04.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.124 12:28:09 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.124 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.125 12:28:09 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.125 12:28:09 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.125 12:28:09 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.125 12:28:09 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.125 12:28:09 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.125 12:28:09 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.125 12:28:09 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:04.125 12:28:09 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.125 12:28:09 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.125 12:28:09 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:04.125 12:28:09 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:04.125 00:24:04.125 real 0m0.128s 00:24:04.125 user 0m0.056s 00:24:04.125 sys 0m0.076s 00:24:04.125 12:28:09 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:04.125 12:28:09 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:04.125 ************************************ 00:24:04.125 END TEST dma 00:24:04.125 ************************************ 00:24:04.125 12:28:09 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:04.125 12:28:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:04.125 12:28:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:04.125 12:28:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.125 ************************************ 00:24:04.125 START TEST 
nvmf_identify 00:24:04.125 ************************************ 00:24:04.125 12:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:04.386 * Looking for test storage... 00:24:04.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.386 12:28:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.387 12:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.526 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:12.527 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:12.527 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:12.527 Found net devices under 0000:31:00.0: cvl_0_0 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:12.527 Found net devices under 0000:31:00.1: cvl_0_1 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:24:12.527 00:24:12.527 --- 10.0.0.2 ping statistics --- 00:24:12.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.527 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:12.527 00:24:12.527 --- 10.0.0.1 ping statistics --- 00:24:12.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.527 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=752856 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 752856 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 752856 ']' 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:12.527 12:28:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.527 [2024-06-10 12:28:17.652288] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:24:12.528 [2024-06-10 12:28:17.652340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.528 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.528 [2024-06-10 12:28:17.730034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.528 [2024-06-10 12:28:17.803158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.528 [2024-06-10 12:28:17.803202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
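[Annotation] Condensed, the nvmf_tcp_init steps above carve a self-contained test bed out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and reachability is confirmed with one ping in each direction. A minimal sketch of the equivalent setup, with the interface and namespace names taken from the trace itself:

$ ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
$ ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
$ ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
$ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
$ ip link set cvl_0_1 up
$ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$ ip netns exec cvl_0_0_ns_spdk ip link set lo up
$ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
$ ping -c 1 10.0.0.2                                                  # initiator -> target
$ ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With both pings answered, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the startup banner that follows.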
00:24:12.528 [2024-06-10 12:28:17.803210] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.528 [2024-06-10 12:28:17.803217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.528 [2024-06-10 12:28:17.803222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.528 [2024-06-10 12:28:17.803302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.528 [2024-06-10 12:28:17.803548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.528 [2024-06-10 12:28:17.803705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.528 [2024-06-10 12:28:17.803705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 [2024-06-10 12:28:18.442562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 Malloc0 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 [2024-06-10 12:28:18.538116] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.101 [ 00:24:13.101 { 00:24:13.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.101 "subtype": "Discovery", 00:24:13.101 "listen_addresses": [ 00:24:13.101 { 00:24:13.101 "trtype": "TCP", 00:24:13.101 "adrfam": "IPv4", 00:24:13.101 "traddr": "10.0.0.2", 00:24:13.101 "trsvcid": "4420" 00:24:13.101 } 00:24:13.101 ], 00:24:13.101 "allow_any_host": true, 00:24:13.101 "hosts": [] 00:24:13.101 }, 00:24:13.101 { 00:24:13.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.101 "subtype": "NVMe", 00:24:13.101 "listen_addresses": [ 00:24:13.101 { 00:24:13.101 "trtype": "TCP", 00:24:13.101 "adrfam": "IPv4", 00:24:13.101 "traddr": "10.0.0.2", 00:24:13.101 "trsvcid": "4420" 00:24:13.101 } 00:24:13.101 ], 00:24:13.101 "allow_any_host": true, 00:24:13.101 "hosts": [], 00:24:13.101 "serial_number": "SPDK00000000000001", 00:24:13.101 "model_number": "SPDK bdev Controller", 00:24:13.101 "max_namespaces": 32, 00:24:13.101 "min_cntlid": 1, 00:24:13.101 "max_cntlid": 65519, 00:24:13.101 "namespaces": [ 00:24:13.101 { 00:24:13.101 "nsid": 1, 00:24:13.101 "bdev_name": "Malloc0", 00:24:13.101 "name": "Malloc0", 00:24:13.101 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:13.101 "eui64": "ABCDEF0123456789", 00:24:13.101 "uuid": "568d6cf9-21bd-4106-b91f-f314c81ee029" 00:24:13.101 } 00:24:13.101 ] 00:24:13.101 } 00:24:13.101 ] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.101 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:13.101 [2024-06-10 12:28:18.598932] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
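[Annotation] Stripped of the xtrace noise, the target configuration that identify.sh performed above boils down to six RPCs followed by the identify utility pointed at the discovery NQN. A sketch with all arguments taken verbatim from the trace, assuming rpc.py stands for SPDK's scripts/rpc.py (the script behind the rpc_cmd wrapper) talking to the default /var/tmp/spdk.sock:

$ rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -o/-u are the harness's TCP tuning flags
$ rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
$ rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$ rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$ rpc.py nvmf_get_subsystems                              # returns the JSON dump shown above
$ spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

The identify run connects to the discovery subsystem rather than cnode1 directly, which is why the debug trace that follows begins with a fabrics connect to nqn.2014-08.org.nvmexpress.discovery.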
00:24:13.101 [2024-06-10 12:28:18.598973] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753182 ] 00:24:13.101 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.101 [2024-06-10 12:28:18.627341] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:13.101 [2024-06-10 12:28:18.627377] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:13.101 [2024-06-10 12:28:18.627381] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:13.101 [2024-06-10 12:28:18.627391] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:13.101 [2024-06-10 12:28:18.627398] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:13.101 [2024-06-10 12:28:18.627687] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:13.101 [2024-06-10 12:28:18.627712] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x198eec0 0 00:24:13.101 [2024-06-10 12:28:18.641200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:13.101 [2024-06-10 12:28:18.641208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:13.101 [2024-06-10 12:28:18.641212] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:13.101 [2024-06-10 12:28:18.641214] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:13.101 [2024-06-10 12:28:18.641242] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.101 [2024-06-10 12:28:18.641247] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.101 [2024-06-10 12:28:18.641250] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.101 [2024-06-10 12:28:18.641261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:13.102 [2024-06-10 12:28:18.641273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649223] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:13.102 [2024-06-10 12:28:18.649228] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649232] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649243] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649246] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649347] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649356] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649361] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649369] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649461] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649466] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649470] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649549] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 
12:28:18.649553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649562] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649568] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649571] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649585] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649647] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649655] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649658] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649662] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:13.102 [2024-06-10 12:28:18.649665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649670] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649774] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:13.102 [2024-06-10 12:28:18.649777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649783] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649786] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649866] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649872] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649875] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649879] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:13.102 [2024-06-10 12:28:18.649885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.649894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.649901] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.649962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.102 [2024-06-10 12:28:18.649967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.102 [2024-06-10 12:28:18.649969] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.102 [2024-06-10 12:28:18.649975] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:13.102 [2024-06-10 12:28:18.649978] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:13.102 [2024-06-10 12:28:18.649983] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:13.102 [2024-06-10 12:28:18.649989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:13.102 [2024-06-10 12:28:18.649995] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.649999] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.102 [2024-06-10 12:28:18.650004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.102 [2024-06-10 12:28:18.650011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.102 [2024-06-10 12:28:18.650097] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.102 [2024-06-10 12:28:18.650101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.102 [2024-06-10 12:28:18.650104] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.102 [2024-06-10 12:28:18.650107] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198eec0): datao=0, datal=4096, cccid=0 00:24:13.103 [2024-06-10 12:28:18.650110] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a13b10) on tqpair(0x198eec0): expected_datao=0, payload_size=4096 00:24:13.103 [2024-06-10 12:28:18.650113] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650130] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650134] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.103 [2024-06-10 12:28:18.650228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.103 [2024-06-10 12:28:18.650230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650233] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.103 [2024-06-10 12:28:18.650238] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:13.103 [2024-06-10 12:28:18.650242] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:13.103 [2024-06-10 12:28:18.650245] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:13.103 [2024-06-10 12:28:18.650250] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:13.103 [2024-06-10 12:28:18.650254] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:13.103 [2024-06-10 12:28:18.650257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:13.103 [2024-06-10 12:28:18.650262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:13.103 [2024-06-10 12:28:18.650267] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:13.103 [2024-06-10 12:28:18.650285] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.103 [2024-06-10 12:28:18.650352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.103 [2024-06-10 12:28:18.650356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.103 [2024-06-10 12:28:18.650359] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650361] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13b10) on tqpair=0x198eec0 00:24:13.103 [2024-06-10 12:28:18.650367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:13.103 [2024-06-10 12:28:18.650383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.103 [2024-06-10 12:28:18.650396] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650398] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650401] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.103 [2024-06-10 12:28:18.650408] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.103 [2024-06-10 12:28:18.650420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:13.103 [2024-06-10 12:28:18.650429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:13.103 [2024-06-10 12:28:18.650434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.103 [2024-06-10 12:28:18.650449] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13b10, cid 0, qid 0 00:24:13.103 [2024-06-10 12:28:18.650452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13c70, cid 1, qid 0 00:24:13.103 [2024-06-10 12:28:18.650455] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13dd0, cid 2, qid 0 00:24:13.103 [2024-06-10 12:28:18.650459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.103 [2024-06-10 12:28:18.650462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a14090, cid 4, qid 0 00:24:13.103 [2024-06-10 12:28:18.650572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.103 [2024-06-10 12:28:18.650576] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.103 [2024-06-10 12:28:18.650579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a14090) on tqpair=0x198eec0 
00:24:13.103 [2024-06-10 12:28:18.650585] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:13.103 [2024-06-10 12:28:18.650589] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:13.103 [2024-06-10 12:28:18.650596] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.650603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.103 [2024-06-10 12:28:18.650611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a14090, cid 4, qid 0 00:24:13.103 [2024-06-10 12:28:18.650679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.103 [2024-06-10 12:28:18.650683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.103 [2024-06-10 12:28:18.650686] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650688] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198eec0): datao=0, datal=4096, cccid=4 00:24:13.103 [2024-06-10 12:28:18.650691] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a14090) on tqpair(0x198eec0): expected_datao=0, payload_size=4096 00:24:13.103 [2024-06-10 12:28:18.650694] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650706] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.650709] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.103 [2024-06-10 12:28:18.692244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.103 [2024-06-10 12:28:18.692247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a14090) on tqpair=0x198eec0 00:24:13.103 [2024-06-10 12:28:18.692261] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:13.103 [2024-06-10 12:28:18.692279] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.692287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.103 [2024-06-10 12:28:18.692292] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x198eec0) 00:24:13.103 [2024-06-10 12:28:18.692301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.103 [2024-06-10 12:28:18.692312] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a14090, cid 4, qid 0 00:24:13.103 [2024-06-10 12:28:18.692315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a141f0, cid 5, qid 0 00:24:13.103 [2024-06-10 12:28:18.692419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.103 [2024-06-10 12:28:18.692423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.103 [2024-06-10 12:28:18.692426] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692428] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198eec0): datao=0, datal=1024, cccid=4 00:24:13.103 [2024-06-10 12:28:18.692431] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a14090) on tqpair(0x198eec0): expected_datao=0, payload_size=1024 00:24:13.103 [2024-06-10 12:28:18.692434] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692439] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692441] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692445] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.103 [2024-06-10 12:28:18.692449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.103 [2024-06-10 12:28:18.692452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.103 [2024-06-10 12:28:18.692454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a141f0) on tqpair=0x198eec0 00:24:13.368 [2024-06-10 12:28:18.734246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.368 [2024-06-10 12:28:18.734255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.368 [2024-06-10 12:28:18.734258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.368 [2024-06-10 12:28:18.734260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a14090) on tqpair=0x198eec0 00:24:13.368 [2024-06-10 12:28:18.734269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.368 [2024-06-10 12:28:18.734271] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198eec0) 00:24:13.368 [2024-06-10 12:28:18.734276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.368 [2024-06-10 12:28:18.734286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a14090, cid 4, qid 0 00:24:13.368 [2024-06-10 12:28:18.734354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.368 [2024-06-10 12:28:18.734359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.368 [2024-06-10 12:28:18.734361] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.368 [2024-06-10 12:28:18.734363] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198eec0): datao=0, datal=3072, cccid=4 00:24:13.368 [2024-06-10 12:28:18.734366] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a14090) on tqpair(0x198eec0): expected_datao=0, payload_size=3072 00:24:13.368 [2024-06-10 12:28:18.734369] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.368 [2024-06-10 12:28:18.734411] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
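
The GET LOG PAGE (02) commands above carry log page identifier 0x70 (the discovery log) in the low byte of cdw10; the header is read first to learn the record count, then the record area, and the decoded page is printed below. A minimal sketch of issuing the same read through SPDK's public API, assuming a connected discovery controller and an admin-queue polling loop in the caller; buffer handling is illustrative:

#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* Completion callback: the payload now holds the discovery log page. */
static void
discovery_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvmf_discovery_log_page *page = ctx;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       page->genctr, page->numrec);
	}
}

/* Issue the read; the completion is delivered when the caller polls
 * spdk_nvme_ctrlr_process_admin_completions(ctrlr). */
int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
{
	/* SPDK_NVME_LOG_DISCOVERY is 0x70, matching the cdw10 values above. */
	return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
						SPDK_NVME_GLOBAL_NS_TAG,
						buf, len, 0,
						discovery_log_done, buf);
}
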
00:24:13.368 [2024-06-10 12:28:18.734413] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.368 [2024-06-10 12:28:18.734450] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.368 [2024-06-10 12:28:18.734452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734455] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a14090) on tqpair=0x198eec0
00:24:13.368 [2024-06-10 12:28:18.734461] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734464] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198eec0)
00:24:13.368 [2024-06-10 12:28:18.734468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.368 [2024-06-10 12:28:18.734477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a14090, cid 4, qid 0
00:24:13.368 [2024-06-10 12:28:18.734545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.368 [2024-06-10 12:28:18.734549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.368 [2024-06-10 12:28:18.734552] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734554] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198eec0): datao=0, datal=8, cccid=4
00:24:13.368 [2024-06-10 12:28:18.734557] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a14090) on tqpair(0x198eec0): expected_datao=0, payload_size=8
00:24:13.368 [2024-06-10 12:28:18.734560] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734564] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.734567] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.776251] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.368 [2024-06-10 12:28:18.776261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.368 [2024-06-10 12:28:18.776264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.368 [2024-06-10 12:28:18.776267] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a14090) on tqpair=0x198eec0
00:24:13.368 =====================================================
00:24:13.368 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:13.368 =====================================================
00:24:13.368 Controller Capabilities/Features
00:24:13.368 ================================
00:24:13.368 Vendor ID: 0000
00:24:13.368 Subsystem Vendor ID: 0000
00:24:13.368 Serial Number: ....................
00:24:13.368 Model Number: ........................................
00:24:13.368 Firmware Version: 24.09
00:24:13.368 Recommended Arb Burst: 0
00:24:13.368 IEEE OUI Identifier: 00 00 00
00:24:13.368 Multi-path I/O
00:24:13.368 May have multiple subsystem ports: No
00:24:13.368 May have multiple controllers: No
00:24:13.368 Associated with SR-IOV VF: No
00:24:13.368 Max Data Transfer Size: 131072
00:24:13.368 Max Number of Namespaces: 0
00:24:13.368 Max Number of I/O Queues: 1024
00:24:13.368 NVMe Specification Version (VS): 1.3
00:24:13.368 NVMe Specification Version (Identify): 1.3
00:24:13.368 Maximum Queue Entries: 128
00:24:13.368 Contiguous Queues Required: Yes
00:24:13.368 Arbitration Mechanisms Supported
00:24:13.368 Weighted Round Robin: Not Supported
00:24:13.368 Vendor Specific: Not Supported
00:24:13.368 Reset Timeout: 15000 ms
00:24:13.368 Doorbell Stride: 4 bytes
00:24:13.368 NVM Subsystem Reset: Not Supported
00:24:13.368 Command Sets Supported
00:24:13.368 NVM Command Set: Supported
00:24:13.368 Boot Partition: Not Supported
00:24:13.368 Memory Page Size Minimum: 4096 bytes
00:24:13.368 Memory Page Size Maximum: 4096 bytes
00:24:13.368 Persistent Memory Region: Not Supported
00:24:13.368 Optional Asynchronous Events Supported
00:24:13.368 Namespace Attribute Notices: Not Supported
00:24:13.368 Firmware Activation Notices: Not Supported
00:24:13.368 ANA Change Notices: Not Supported
00:24:13.368 PLE Aggregate Log Change Notices: Not Supported
00:24:13.368 LBA Status Info Alert Notices: Not Supported
00:24:13.368 EGE Aggregate Log Change Notices: Not Supported
00:24:13.368 Normal NVM Subsystem Shutdown event: Not Supported
00:24:13.368 Zone Descriptor Change Notices: Not Supported
00:24:13.368 Discovery Log Change Notices: Supported
00:24:13.368 Controller Attributes
00:24:13.368 128-bit Host Identifier: Not Supported
00:24:13.368 Non-Operational Permissive Mode: Not Supported
00:24:13.368 NVM Sets: Not Supported
00:24:13.368 Read Recovery Levels: Not Supported
00:24:13.368 Endurance Groups: Not Supported
00:24:13.368 Predictable Latency Mode: Not Supported
00:24:13.368 Traffic Based Keep ALive: Not Supported
00:24:13.368 Namespace Granularity: Not Supported
00:24:13.368 SQ Associations: Not Supported
00:24:13.368 UUID List: Not Supported
00:24:13.368 Multi-Domain Subsystem: Not Supported
00:24:13.368 Fixed Capacity Management: Not Supported
00:24:13.368 Variable Capacity Management: Not Supported
00:24:13.368 Delete Endurance Group: Not Supported
00:24:13.368 Delete NVM Set: Not Supported
00:24:13.368 Extended LBA Formats Supported: Not Supported
00:24:13.368 Flexible Data Placement Supported: Not Supported
00:24:13.368
00:24:13.368 Controller Memory Buffer Support
00:24:13.368 ================================
00:24:13.368 Supported: No
00:24:13.368
00:24:13.368 Persistent Memory Region Support
00:24:13.368 ================================
00:24:13.368 Supported: No
00:24:13.368
00:24:13.368 Admin Command Set Attributes
00:24:13.368 ============================
00:24:13.368 Security Send/Receive: Not Supported
00:24:13.368 Format NVM: Not Supported
00:24:13.368 Firmware Activate/Download: Not Supported
00:24:13.368 Namespace Management: Not Supported
00:24:13.368 Device Self-Test: Not Supported
00:24:13.368 Directives: Not Supported
00:24:13.368 NVMe-MI: Not Supported
00:24:13.368 Virtualization Management: Not Supported
00:24:13.368 Doorbell Buffer Config: Not Supported
00:24:13.368 Get LBA Status Capability: Not Supported
00:24:13.368 Command & Feature Lockdown Capability: Not Supported
00:24:13.368 Abort Command Limit: 1
00:24:13.368 Async Event Request Limit: 4
00:24:13.368 Number of Firmware Slots: N/A
00:24:13.368 Firmware Slot 1 Read-Only: N/A
00:24:13.368 Firmware Activation Without Reset: N/A
00:24:13.368 Multiple Update Detection Support: N/A
00:24:13.368 Firmware Update Granularity: No Information Provided
00:24:13.368 Per-Namespace SMART Log: No
00:24:13.368 Asymmetric Namespace Access Log Page: Not Supported
00:24:13.368 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:13.368 Command Effects Log Page: Not Supported
00:24:13.368 Get Log Page Extended Data: Supported
00:24:13.368 Telemetry Log Pages: Not Supported
00:24:13.368 Persistent Event Log Pages: Not Supported
00:24:13.369 Supported Log Pages Log Page: May Support
00:24:13.369 Commands Supported & Effects Log Page: Not Supported
00:24:13.369 Feature Identifiers & Effects Log Page:May Support
00:24:13.369 NVMe-MI Commands & Effects Log Page: May Support
00:24:13.369 Data Area 4 for Telemetry Log: Not Supported
00:24:13.369 Error Log Page Entries Supported: 128
00:24:13.369 Keep Alive: Not Supported
00:24:13.369
00:24:13.369 NVM Command Set Attributes
00:24:13.369 ==========================
00:24:13.369 Submission Queue Entry Size
00:24:13.369 Max: 1
00:24:13.369 Min: 1
00:24:13.369 Completion Queue Entry Size
00:24:13.369 Max: 1
00:24:13.369 Min: 1
00:24:13.369 Number of Namespaces: 0
00:24:13.369 Compare Command: Not Supported
00:24:13.369 Write Uncorrectable Command: Not Supported
00:24:13.369 Dataset Management Command: Not Supported
00:24:13.369 Write Zeroes Command: Not Supported
00:24:13.369 Set Features Save Field: Not Supported
00:24:13.369 Reservations: Not Supported
00:24:13.369 Timestamp: Not Supported
00:24:13.369 Copy: Not Supported
00:24:13.369 Volatile Write Cache: Not Present
00:24:13.369 Atomic Write Unit (Normal): 1
00:24:13.369 Atomic Write Unit (PFail): 1
00:24:13.369 Atomic Compare & Write Unit: 1
00:24:13.369 Fused Compare & Write: Supported
00:24:13.369 Scatter-Gather List
00:24:13.369 SGL Command Set: Supported
00:24:13.369 SGL Keyed: Supported
00:24:13.369 SGL Bit Bucket Descriptor: Not Supported
00:24:13.369 SGL Metadata Pointer: Not Supported
00:24:13.369 Oversized SGL: Not Supported
00:24:13.369 SGL Metadata Address: Not Supported
00:24:13.369 SGL Offset: Supported
00:24:13.369 Transport SGL Data Block: Not Supported
00:24:13.369 Replay Protected Memory Block: Not Supported
00:24:13.369
00:24:13.369 Firmware Slot Information
00:24:13.369 =========================
00:24:13.369 Active slot: 0
00:24:13.369
00:24:13.369
00:24:13.369 Error Log
00:24:13.369 =========
00:24:13.369
00:24:13.369 Active Namespaces
00:24:13.369 =================
00:24:13.369 Discovery Log Page
00:24:13.369 ==================
00:24:13.369 Generation Counter: 2
00:24:13.369 Number of Records: 2
00:24:13.369 Record Format: 0
00:24:13.369
00:24:13.369 Discovery Log Entry 0
00:24:13.369 ----------------------
00:24:13.369 Transport Type: 3 (TCP)
00:24:13.369 Address Family: 1 (IPv4)
00:24:13.369 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:13.369 Entry Flags:
00:24:13.369 Duplicate Returned Information: 1
00:24:13.369 Explicit Persistent Connection Support for Discovery: 1
00:24:13.369 Transport Requirements:
00:24:13.369 Secure Channel: Not Required
00:24:13.369 Port ID: 0 (0x0000)
00:24:13.369 Controller ID: 65535 (0xffff)
00:24:13.369 Admin Max SQ Size: 128
00:24:13.369 Transport Service Identifier: 4420
00:24:13.369 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:13.369 Transport Address: 10.0.0.2
Discovery Log Entry 1
00:24:13.369 ----------------------
00:24:13.369 Transport Type: 3 (TCP)
00:24:13.369 Address Family: 1 (IPv4)
00:24:13.369 Subsystem Type: 2 (NVM Subsystem)
00:24:13.369 Entry Flags:
00:24:13.369 Duplicate Returned Information: 0
00:24:13.369 Explicit Persistent Connection Support for Discovery: 0
00:24:13.369 Transport Requirements:
00:24:13.369 Secure Channel: Not Required
00:24:13.369 Port ID: 0 (0x0000)
00:24:13.369 Controller ID: 65535 (0xffff)
00:24:13.369 Admin Max SQ Size: 128
00:24:13.369 Transport Service Identifier: 4420
00:24:13.369 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:13.369 Transport Address: 10.0.0.2
[2024-06-10 12:28:18.776333] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:13.369 [2024-06-10 12:28:18.776344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.369 [2024-06-10 12:28:18.776350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.369 [2024-06-10 12:28:18.776355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.369 [2024-06-10 12:28:18.776359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.369 [2024-06-10 12:28:18.776365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.369 [2024-06-10 12:28:18.776368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.369 [2024-06-10 12:28:18.776370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0)
00:24:13.369 [2024-06-10 12:28:18.776375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.369 [2024-06-10 12:28:18.776387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0
00:24:13.369 [2024-06-10 12:28:18.776446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.369 [2024-06-10 12:28:18.776451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.369 [2024-06-10 12:28:18.776453] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.369 [2024-06-10 12:28:18.776456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0
00:24:13.369 [2024-06-10 12:28:18.776463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.369 [2024-06-10 12:28:18.776466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.369 [2024-06-10 12:28:18.776468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0)
00:24:13.369 [2024-06-10 12:28:18.776472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.369 [2024-06-10 12:28:18.776482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0
00:24:13.369 [2024-06-10 12:28:18.776583] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.369 [2024-06-10 12:28:18.776587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.369 [2024-06-10 12:28:18.776589]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.369 [2024-06-10 12:28:18.776596] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:13.369 [2024-06-10 12:28:18.776599] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:13.369 [2024-06-10 12:28:18.776606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.369 [2024-06-10 12:28:18.776616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.369 [2024-06-10 12:28:18.776622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.369 [2024-06-10 12:28:18.776688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.369 [2024-06-10 12:28:18.776692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.369 [2024-06-10 12:28:18.776694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.369 [2024-06-10 12:28:18.776705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776711] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.369 [2024-06-10 12:28:18.776716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.369 [2024-06-10 12:28:18.776723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.369 [2024-06-10 12:28:18.776805] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.369 [2024-06-10 12:28:18.776809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.369 [2024-06-10 12:28:18.776812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.369 [2024-06-10 12:28:18.776821] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.369 [2024-06-10 12:28:18.776831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.369 [2024-06-10 12:28:18.776837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.369 [2024-06-10 12:28:18.776902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.369 [2024-06-10 
12:28:18.776906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.369 [2024-06-10 12:28:18.776909] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.369 [2024-06-10 12:28:18.776918] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.369 [2024-06-10 12:28:18.776923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.370 [2024-06-10 12:28:18.776928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.776935] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.370 [2024-06-10 12:28:18.776997] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.777001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.777003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.370 [2024-06-10 12:28:18.777013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.370 [2024-06-10 12:28:18.777023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.777029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.370 [2024-06-10 12:28:18.777087] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.777091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.777094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0 00:24:13.370 [2024-06-10 12:28:18.777103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777106] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.777108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0) 00:24:13.370 [2024-06-10 12:28:18.777115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.777122] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0 00:24:13.370 [2024-06-10 12:28:18.777178] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.777182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.777185] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:13.370 [2024-06-10 12:28:18.777187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0
00:24:13.370 [2024-06-10 12:28:18.781198] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.370 [2024-06-10 12:28:18.781202] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.370 [2024-06-10 12:28:18.781204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198eec0)
00:24:13.370 [2024-06-10 12:28:18.781209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.370 [2024-06-10 12:28:18.781216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a13f30, cid 3, qid 0
00:24:13.370 [2024-06-10 12:28:18.781280] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.370 [2024-06-10 12:28:18.781284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.370 [2024-06-10 12:28:18.781287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.370 [2024-06-10 12:28:18.781289] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a13f30) on tqpair=0x198eec0
00:24:13.370 [2024-06-10 12:28:18.781295] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:24:13.370
00:24:13.370 12:28:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:13.370 [2024-06-10 12:28:18.817637] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
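
The identify utility is handed the target as an SPDK transport-ID string via -r; everything below is the same controller bring-up state machine, now against nqn.2016-06.io.spdk:cnode1 instead of the discovery subsystem. A minimal sketch of consuming that string programmatically with SPDK's public API, assuming the target from this run is reachable; the app name and error handling are illustrative:

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* illustrative app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* The same transport-ID string that host/identify.sh passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/identify state machine traced in this log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected to %s\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}
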
00:24:13.370 [2024-06-10 12:28:18.817703] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753186 ] 00:24:13.370 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.370 [2024-06-10 12:28:18.846221] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:13.370 [2024-06-10 12:28:18.846253] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:13.370 [2024-06-10 12:28:18.846256] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:13.370 [2024-06-10 12:28:18.846266] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:13.370 [2024-06-10 12:28:18.846273] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:13.370 [2024-06-10 12:28:18.846706] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:13.370 [2024-06-10 12:28:18.846723] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf57ec0 0 00:24:13.370 [2024-06-10 12:28:18.861201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:13.370 [2024-06-10 12:28:18.861209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:13.370 [2024-06-10 12:28:18.861212] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:13.370 [2024-06-10 12:28:18.861214] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:13.370 [2024-06-10 12:28:18.861239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.861243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.861246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.370 [2024-06-10 12:28:18.861255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:13.370 [2024-06-10 12:28:18.861266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.370 [2024-06-10 12:28:18.869203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.869209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.869211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.370 [2024-06-10 12:28:18.869221] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:13.370 [2024-06-10 12:28:18.869225] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:13.370 [2024-06-10 12:28:18.869229] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:13.370 [2024-06-10 12:28:18.869237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 
12:28:18.869242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.370 [2024-06-10 12:28:18.869247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.869256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.370 [2024-06-10 12:28:18.869453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.869457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.869460] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.370 [2024-06-10 12:28:18.869465] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:13.370 [2024-06-10 12:28:18.869470] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:13.370 [2024-06-10 12:28:18.869475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869477] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.370 [2024-06-10 12:28:18.869484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.869491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.370 [2024-06-10 12:28:18.869713] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.869717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.869720] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869722] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.370 [2024-06-10 12:28:18.869726] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:13.370 [2024-06-10 12:28:18.869731] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:13.370 [2024-06-10 12:28:18.869735] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.370 [2024-06-10 12:28:18.869747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.869754] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.370 [2024-06-10 12:28:18.869979] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.869984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 
[2024-06-10 12:28:18.869986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.869988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.370 [2024-06-10 12:28:18.869992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:13.370 [2024-06-10 12:28:18.869998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.870001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.370 [2024-06-10 12:28:18.870003] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.370 [2024-06-10 12:28:18.870008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.370 [2024-06-10 12:28:18.870014] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.370 [2024-06-10 12:28:18.870206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.370 [2024-06-10 12:28:18.870210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.370 [2024-06-10 12:28:18.870213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.371 [2024-06-10 12:28:18.870219] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:13.371 [2024-06-10 12:28:18.870222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:13.371 [2024-06-10 12:28:18.870227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:13.371 [2024-06-10 12:28:18.870331] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:13.371 [2024-06-10 12:28:18.870333] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:13.371 [2024-06-10 12:28:18.870339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.870348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.371 [2024-06-10 12:28:18.870355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.371 [2024-06-10 12:28:18.870546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.371 [2024-06-10 12:28:18.870550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.371 [2024-06-10 12:28:18.870553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.371 
[2024-06-10 12:28:18.870559] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:13.371 [2024-06-10 12:28:18.870566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870569] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.870576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.371 [2024-06-10 12:28:18.870583] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.371 [2024-06-10 12:28:18.870799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.371 [2024-06-10 12:28:18.870804] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.371 [2024-06-10 12:28:18.870806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870809] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.371 [2024-06-10 12:28:18.870812] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:13.371 [2024-06-10 12:28:18.870815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.870820] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:13.371 [2024-06-10 12:28:18.870825] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.870831] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.870833] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.870838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.371 [2024-06-10 12:28:18.870845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.371 [2024-06-10 12:28:18.871052] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.371 [2024-06-10 12:28:18.871056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.371 [2024-06-10 12:28:18.871058] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871061] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=4096, cccid=0 00:24:13.371 [2024-06-10 12:28:18.871064] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdcb10) on tqpair(0xf57ec0): expected_datao=0, payload_size=4096 00:24:13.371 [2024-06-10 12:28:18.871067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871083] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871086] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
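
The c2h_data PDU just read (datal=4096, cccid=0) carries the Identify Controller data structure; the identify_done entries that follow report fields parsed from it (transport max_xfer_size, the MDTS-derived 131072, CNTLID 0x0001). Once initialization finishes, an application reads the same data from the driver's cached copy; a minimal sketch, assuming a ready ctrlr handle, with the field selection being illustrative:

#include <stdio.h>

#include "spdk/nvme.h"

/* Print a few of the fields that nvme_ctrlr_identify_done logs below. */
void
print_ctrlr_data(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* CNTLID 0x0001 and the MDTS-derived transfer size come from here. */
	printf("cntlid=0x%04x mdts=%u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);
	printf("subnqn=%s\n", (const char *)cdata->subnqn);
}
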
00:24:13.371 [2024-06-10 12:28:18.871237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.371 [2024-06-10 12:28:18.871242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.371 [2024-06-10 12:28:18.871244] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871247] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.371 [2024-06-10 12:28:18.871251] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:13.371 [2024-06-10 12:28:18.871255] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:13.371 [2024-06-10 12:28:18.871258] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:13.371 [2024-06-10 12:28:18.871262] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:13.371 [2024-06-10 12:28:18.871265] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:13.371 [2024-06-10 12:28:18.871270] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871275] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871285] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:13.371 [2024-06-10 12:28:18.871297] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.371 [2024-06-10 12:28:18.871489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.371 [2024-06-10 12:28:18.871493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.371 [2024-06-10 12:28:18.871495] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871498] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcb10) on tqpair=0xf57ec0 00:24:13.371 [2024-06-10 12:28:18.871502] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.371 [2024-06-10 12:28:18.871515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871518] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.371 [2024-06-10 12:28:18.871528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871533] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.371 [2024-06-10 12:28:18.871541] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871544] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.371 [2024-06-10 12:28:18.871553] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871565] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.371 [2024-06-10 12:28:18.871572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.371 [2024-06-10 12:28:18.871579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcb10, cid 0, qid 0 00:24:13.371 [2024-06-10 12:28:18.871586] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcc70, cid 1, qid 0 00:24:13.371 [2024-06-10 12:28:18.871589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcdd0, cid 2, qid 0 00:24:13.371 [2024-06-10 12:28:18.871592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcf30, cid 3, qid 0 00:24:13.371 [2024-06-10 12:28:18.871595] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.371 [2024-06-10 12:28:18.871818] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.371 [2024-06-10 12:28:18.871823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.371 [2024-06-10 12:28:18.871825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.371 [2024-06-10 12:28:18.871827] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.371 [2024-06-10 12:28:18.871831] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:13.371 [2024-06-10 12:28:18.871834] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:13.371 
[2024-06-10 12:28:18.871839] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871844] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:13.371 [2024-06-10 12:28:18.871848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.871851] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.871853] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.871857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:13.372 [2024-06-10 12:28:18.871864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.372 [2024-06-10 12:28:18.872066] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.372 [2024-06-10 12:28:18.872070] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.372 [2024-06-10 12:28:18.872072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.872075] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.372 [2024-06-10 12:28:18.872111] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.872117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.872123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.872125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.872130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.372 [2024-06-10 12:28:18.872136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.372 [2024-06-10 12:28:18.872345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.372 [2024-06-10 12:28:18.872350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.372 [2024-06-10 12:28:18.872352] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.872355] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=4096, cccid=4 00:24:13.372 [2024-06-10 12:28:18.872358] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd090) on tqpair(0xf57ec0): expected_datao=0, payload_size=4096 00:24:13.372 [2024-06-10 12:28:18.872361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.872405] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.872408] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.372 [2024-06-10 12:28:18.915207] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.372 [2024-06-10 12:28:18.915210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915212] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.372 [2024-06-10 12:28:18.915220] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:13.372 [2024-06-10 12:28:18.915231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.915238] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.915243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915245] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.915250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.372 [2024-06-10 12:28:18.915258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.372 [2024-06-10 12:28:18.915439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.372 [2024-06-10 12:28:18.915443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.372 [2024-06-10 12:28:18.915445] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915448] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=4096, cccid=4 00:24:13.372 [2024-06-10 12:28:18.915451] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd090) on tqpair(0xf57ec0): expected_datao=0, payload_size=4096 00:24:13.372 [2024-06-10 12:28:18.915454] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915467] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.915469] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.372 [2024-06-10 12:28:18.957360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.372 [2024-06-10 12:28:18.957362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.372 [2024-06-10 12:28:18.957375] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.957395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.372 [2024-06-10 12:28:18.957403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.372 [2024-06-10 12:28:18.957589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.372 [2024-06-10 12:28:18.957593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.372 [2024-06-10 12:28:18.957596] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957600] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=4096, cccid=4 00:24:13.372 [2024-06-10 12:28:18.957603] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd090) on tqpair(0xf57ec0): expected_datao=0, payload_size=4096 00:24:13.372 [2024-06-10 12:28:18.957606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957622] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957625] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.372 [2024-06-10 12:28:18.957807] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.372 [2024-06-10 12:28:18.957809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.372 [2024-06-10 12:28:18.957817] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957822] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957828] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957832] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957835] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957839] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:13.372 [2024-06-10 12:28:18.957842] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:13.372 [2024-06-10 12:28:18.957845] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:13.372 [2024-06-10 12:28:18.957857] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957860] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.957864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.372 [2024-06-10 12:28:18.957869] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957872] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.372 [2024-06-10 12:28:18.957874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf57ec0) 00:24:13.372 [2024-06-10 12:28:18.957878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.372 [2024-06-10 12:28:18.957887] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.372 [2024-06-10 12:28:18.957891] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd1f0, cid 5, qid 0 00:24:13.372 [2024-06-10 12:28:18.958056] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.372 [2024-06-10 12:28:18.958061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.372 [2024-06-10 12:28:18.958063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.958065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.373 [2024-06-10 12:28:18.958070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.373 [2024-06-10 12:28:18.958074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.373 [2024-06-10 12:28:18.958076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.958079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd1f0) on tqpair=0xf57ec0 00:24:13.373 [2024-06-10 12:28:18.958086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.958089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.958093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.958100] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd1f0, cid 5, qid 0 00:24:13.373 [2024-06-10 12:28:18.963198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.373 [2024-06-10 12:28:18.963204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.373 [2024-06-10 12:28:18.963206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd1f0) on tqpair=0xf57ec0 00:24:13.373 [2024-06-10 12:28:18.963215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963218] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963229] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd1f0, cid 5, qid 0 00:24:13.373 [2024-06-10 12:28:18.963425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.373 [2024-06-10 12:28:18.963429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.373 [2024-06-10 12:28:18.963432] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd1f0) on tqpair=0xf57ec0 00:24:13.373 [2024-06-10 12:28:18.963440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963443] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963453] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd1f0, cid 5, qid 0 00:24:13.373 [2024-06-10 12:28:18.963675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.373 [2024-06-10 12:28:18.963679] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.373 [2024-06-10 12:28:18.963682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd1f0) on tqpair=0xf57ec0 00:24:13.373 [2024-06-10 12:28:18.963692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963695] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.963732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf57ec0) 00:24:13.373 [2024-06-10 12:28:18.963736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.373 [2024-06-10 12:28:18.963744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd1f0, cid 5, qid 0 00:24:13.373 [2024-06-10 12:28:18.963747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd090, cid 4, qid 0 00:24:13.373 [2024-06-10 12:28:18.963750] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd350, cid 6, qid 0 00:24:13.373 [2024-06-10 12:28:18.963754] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd4b0, cid 7, qid 0 00:24:13.373 [2024-06-10 12:28:18.964008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.373 [2024-06-10 12:28:18.964012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.373 [2024-06-10 12:28:18.964014] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964017] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=8192, cccid=5 00:24:13.373 [2024-06-10 12:28:18.964020] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd1f0) on tqpair(0xf57ec0): expected_datao=0, payload_size=8192 00:24:13.373 [2024-06-10 12:28:18.964023] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964114] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964117] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.373 [2024-06-10 12:28:18.964125] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.373 [2024-06-10 12:28:18.964127] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964129] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=512, cccid=4 00:24:13.373 [2024-06-10 12:28:18.964132] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd090) on tqpair(0xf57ec0): expected_datao=0, payload_size=512 00:24:13.373 [2024-06-10 12:28:18.964135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964140] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964142] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.373 [2024-06-10 12:28:18.964149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.373 [2024-06-10 12:28:18.964152] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964154] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=512, cccid=6 00:24:13.373 [2024-06-10 12:28:18.964157] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd350) on tqpair(0xf57ec0): expected_datao=0, payload_size=512 00:24:13.373 [2024-06-10 12:28:18.964160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964164] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964166] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:13.373 [2024-06-10 12:28:18.964174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:13.373 [2024-06-10 12:28:18.964176] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964179] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf57ec0): datao=0, datal=4096, cccid=7 00:24:13.373 [2024-06-10 12:28:18.964183] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xfdd4b0) on tqpair(0xf57ec0): expected_datao=0, payload_size=4096 00:24:13.373 [2024-06-10 12:28:18.964188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964206] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:13.373 [2024-06-10 12:28:18.964209] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:13.635 [2024-06-10 12:28:19.005385] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.635 [2024-06-10 12:28:19.005393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.635 [2024-06-10 12:28:19.005395] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.635 [2024-06-10 12:28:19.005398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd1f0) on tqpair=0xf57ec0 00:24:13.635 [2024-06-10 12:28:19.005407] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.635 [2024-06-10 12:28:19.005411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.635 [2024-06-10 12:28:19.005414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.635 [2024-06-10 12:28:19.005416] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd090) on tqpair=0xf57ec0 00:24:13.635 [2024-06-10 12:28:19.005422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.635 [2024-06-10 12:28:19.005426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.635 [2024-06-10 12:28:19.005428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.635 [2024-06-10 12:28:19.005431] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd350) on tqpair=0xf57ec0 00:24:13.635 [2024-06-10 12:28:19.005439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.635 [2024-06-10 12:28:19.005443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.635 [2024-06-10 12:28:19.005445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.635 [2024-06-10 12:28:19.005448] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd4b0) on tqpair=0xf57ec0 00:24:13.635 ===================================================== 00:24:13.635 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.635 ===================================================== 00:24:13.635 Controller Capabilities/Features 00:24:13.635 ================================ 00:24:13.635 Vendor ID: 8086 00:24:13.635 Subsystem Vendor ID: 8086 00:24:13.635 Serial Number: SPDK00000000000001 00:24:13.635 Model Number: SPDK bdev Controller 00:24:13.635 Firmware Version: 24.09 00:24:13.635 Recommended Arb Burst: 6 00:24:13.635 IEEE OUI Identifier: e4 d2 5c 00:24:13.635 Multi-path I/O 00:24:13.635 May have multiple subsystem ports: Yes 00:24:13.635 May have multiple controllers: Yes 00:24:13.635 Associated with SR-IOV VF: No 00:24:13.635 Max Data Transfer Size: 131072 00:24:13.635 Max Number of Namespaces: 32 00:24:13.635 Max Number of I/O Queues: 127 00:24:13.635 NVMe Specification Version (VS): 1.3 00:24:13.635 NVMe Specification Version (Identify): 1.3 00:24:13.635 Maximum Queue Entries: 128 00:24:13.635 Contiguous Queues Required: Yes 00:24:13.635 Arbitration Mechanisms Supported 00:24:13.635 Weighted Round Robin: Not Supported 00:24:13.635 Vendor Specific: Not Supported 00:24:13.635 Reset Timeout: 15000 ms 00:24:13.635 Doorbell Stride: 4 bytes 00:24:13.635 
NVM Subsystem Reset: Not Supported 00:24:13.635 Command Sets Supported 00:24:13.635 NVM Command Set: Supported 00:24:13.635 Boot Partition: Not Supported 00:24:13.635 Memory Page Size Minimum: 4096 bytes 00:24:13.635 Memory Page Size Maximum: 4096 bytes 00:24:13.635 Persistent Memory Region: Not Supported 00:24:13.635 Optional Asynchronous Events Supported 00:24:13.635 Namespace Attribute Notices: Supported 00:24:13.635 Firmware Activation Notices: Not Supported 00:24:13.635 ANA Change Notices: Not Supported 00:24:13.635 PLE Aggregate Log Change Notices: Not Supported 00:24:13.635 LBA Status Info Alert Notices: Not Supported 00:24:13.635 EGE Aggregate Log Change Notices: Not Supported 00:24:13.635 Normal NVM Subsystem Shutdown event: Not Supported 00:24:13.635 Zone Descriptor Change Notices: Not Supported 00:24:13.635 Discovery Log Change Notices: Not Supported 00:24:13.635 Controller Attributes 00:24:13.635 128-bit Host Identifier: Supported 00:24:13.635 Non-Operational Permissive Mode: Not Supported 00:24:13.635 NVM Sets: Not Supported 00:24:13.635 Read Recovery Levels: Not Supported 00:24:13.635 Endurance Groups: Not Supported 00:24:13.635 Predictable Latency Mode: Not Supported 00:24:13.635 Traffic Based Keep Alive: Not Supported 00:24:13.635 Namespace Granularity: Not Supported 00:24:13.635 SQ Associations: Not Supported 00:24:13.635 UUID List: Not Supported 00:24:13.635 Multi-Domain Subsystem: Not Supported 00:24:13.635 Fixed Capacity Management: Not Supported 00:24:13.635 Variable Capacity Management: Not Supported 00:24:13.635 Delete Endurance Group: Not Supported 00:24:13.635 Delete NVM Set: Not Supported 00:24:13.635 Extended LBA Formats Supported: Not Supported 00:24:13.635 Flexible Data Placement Supported: Not Supported 00:24:13.635 00:24:13.635 Controller Memory Buffer Support 00:24:13.635 ================================ 00:24:13.635 Supported: No 00:24:13.635 00:24:13.635 Persistent Memory Region Support 00:24:13.636 ================================ 00:24:13.636 Supported: No 00:24:13.636 00:24:13.636 Admin Command Set Attributes 00:24:13.636 ============================ 00:24:13.636 Security Send/Receive: Not Supported 00:24:13.636 Format NVM: Not Supported 00:24:13.636 Firmware Activate/Download: Not Supported 00:24:13.636 Namespace Management: Not Supported 00:24:13.636 Device Self-Test: Not Supported 00:24:13.636 Directives: Not Supported 00:24:13.636 NVMe-MI: Not Supported 00:24:13.636 Virtualization Management: Not Supported 00:24:13.636 Doorbell Buffer Config: Not Supported 00:24:13.636 Get LBA Status Capability: Not Supported 00:24:13.636 Command & Feature Lockdown Capability: Not Supported 00:24:13.636 Abort Command Limit: 4 00:24:13.636 Async Event Request Limit: 4 00:24:13.636 Number of Firmware Slots: N/A 00:24:13.636 Firmware Slot 1 Read-Only: N/A 00:24:13.636 Firmware Activation Without Reset: N/A 00:24:13.636 Multiple Update Detection Support: N/A 00:24:13.636 Firmware Update Granularity: No Information Provided 00:24:13.636 Per-Namespace SMART Log: No 00:24:13.636 Asymmetric Namespace Access Log Page: Not Supported 00:24:13.636 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:13.636 Command Effects Log Page: Supported 00:24:13.636 Get Log Page Extended Data: Supported 00:24:13.636 Telemetry Log Pages: Not Supported 00:24:13.636 Persistent Event Log Pages: Not Supported 00:24:13.636 Supported Log Pages Log Page: May Support 00:24:13.636 Commands Supported & Effects Log Page: Not Supported 00:24:13.636 Feature Identifiers & Effects Log Page: May Support 
00:24:13.636 NVMe-MI Commands & Effects Log Page: May Support 00:24:13.636 Data Area 4 for Telemetry Log: Not Supported 00:24:13.636 Error Log Page Entries Supported: 128 00:24:13.636 Keep Alive: Supported 00:24:13.636 Keep Alive Granularity: 10000 ms 00:24:13.636 00:24:13.636 NVM Command Set Attributes 00:24:13.636 ========================== 00:24:13.636 Submission Queue Entry Size 00:24:13.636 Max: 64 00:24:13.636 Min: 64 00:24:13.636 Completion Queue Entry Size 00:24:13.636 Max: 16 00:24:13.636 Min: 16 00:24:13.636 Number of Namespaces: 32 00:24:13.636 Compare Command: Supported 00:24:13.636 Write Uncorrectable Command: Not Supported 00:24:13.636 Dataset Management Command: Supported 00:24:13.636 Write Zeroes Command: Supported 00:24:13.636 Set Features Save Field: Not Supported 00:24:13.636 Reservations: Supported 00:24:13.636 Timestamp: Not Supported 00:24:13.636 Copy: Supported 00:24:13.636 Volatile Write Cache: Present 00:24:13.636 Atomic Write Unit (Normal): 1 00:24:13.636 Atomic Write Unit (PFail): 1 00:24:13.636 Atomic Compare & Write Unit: 1 00:24:13.636 Fused Compare & Write: Supported 00:24:13.636 Scatter-Gather List 00:24:13.636 SGL Command Set: Supported 00:24:13.636 SGL Keyed: Supported 00:24:13.636 SGL Bit Bucket Descriptor: Not Supported 00:24:13.636 SGL Metadata Pointer: Not Supported 00:24:13.636 Oversized SGL: Not Supported 00:24:13.636 SGL Metadata Address: Not Supported 00:24:13.636 SGL Offset: Supported 00:24:13.636 Transport SGL Data Block: Not Supported 00:24:13.636 Replay Protected Memory Block: Not Supported 00:24:13.636 00:24:13.636 Firmware Slot Information 00:24:13.636 ========================= 00:24:13.636 Active slot: 1 00:24:13.636 Slot 1 Firmware Revision: 24.09 00:24:13.636 00:24:13.636 00:24:13.636 Commands Supported and Effects 00:24:13.636 ============================== 00:24:13.636 Admin Commands 00:24:13.636 -------------- 00:24:13.636 Get Log Page (02h): Supported 00:24:13.636 Identify (06h): Supported 00:24:13.636 Abort (08h): Supported 00:24:13.636 Set Features (09h): Supported 00:24:13.636 Get Features (0Ah): Supported 00:24:13.636 Asynchronous Event Request (0Ch): Supported 00:24:13.636 Keep Alive (18h): Supported 00:24:13.636 I/O Commands 00:24:13.636 ------------ 00:24:13.636 Flush (00h): Supported LBA-Change 00:24:13.636 Write (01h): Supported LBA-Change 00:24:13.636 Read (02h): Supported 00:24:13.636 Compare (05h): Supported 00:24:13.636 Write Zeroes (08h): Supported LBA-Change 00:24:13.636 Dataset Management (09h): Supported LBA-Change 00:24:13.636 Copy (19h): Supported LBA-Change 00:24:13.636 Unknown (79h): Supported LBA-Change 00:24:13.636 Unknown (7Ah): Supported 00:24:13.636 00:24:13.636 Error Log 00:24:13.636 ========= 00:24:13.636 00:24:13.636 Arbitration 00:24:13.636 =========== 00:24:13.636 Arbitration Burst: 1 00:24:13.636 00:24:13.636 Power Management 00:24:13.636 ================ 00:24:13.636 Number of Power States: 1 00:24:13.636 Current Power State: Power State #0 00:24:13.636 Power State #0: 00:24:13.636 Max Power: 0.00 W 00:24:13.636 Non-Operational State: Operational 00:24:13.636 Entry Latency: Not Reported 00:24:13.636 Exit Latency: Not Reported 00:24:13.636 Relative Read Throughput: 0 00:24:13.636 Relative Read Latency: 0 00:24:13.636 Relative Write Throughput: 0 00:24:13.636 Relative Write Latency: 0 00:24:13.636 Idle Power: Not Reported 00:24:13.636 Active Power: Not Reported 00:24:13.636 Non-Operational Permissive Mode: Not Supported 00:24:13.636 00:24:13.636 Health Information 00:24:13.636 ================== 
00:24:13.636 Critical Warnings: 00:24:13.636 Available Spare Space: OK 00:24:13.636 Temperature: OK 00:24:13.636 Device Reliability: OK 00:24:13.636 Read Only: No 00:24:13.636 Volatile Memory Backup: OK 00:24:13.636 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:13.636 Temperature Threshold: [2024-06-10 12:28:19.005518] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005521] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf57ec0) 00:24:13.636 [2024-06-10 12:28:19.005526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.636 [2024-06-10 12:28:19.005535] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd4b0, cid 7, qid 0 00:24:13.636 [2024-06-10 12:28:19.005652] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.636 [2024-06-10 12:28:19.005657] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.636 [2024-06-10 12:28:19.005659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005662] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdd4b0) on tqpair=0xf57ec0 00:24:13.636 [2024-06-10 12:28:19.005683] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:13.636 [2024-06-10 12:28:19.005691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.636 [2024-06-10 12:28:19.005695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.636 [2024-06-10 12:28:19.005699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.636 [2024-06-10 12:28:19.005704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.636 [2024-06-10 12:28:19.005709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf57ec0) 00:24:13.636 [2024-06-10 12:28:19.005719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.636 [2024-06-10 12:28:19.005728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcf30, cid 3, qid 0 00:24:13.636 [2024-06-10 12:28:19.005892] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.636 [2024-06-10 12:28:19.005896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.636 [2024-06-10 12:28:19.005898] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005901] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcf30) on tqpair=0xf57ec0 00:24:13.636 [2024-06-10 12:28:19.005906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005908] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.005911] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf57ec0) 00:24:13.636 [2024-06-10 12:28:19.005915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.636 [2024-06-10 12:28:19.005923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcf30, cid 3, qid 0 00:24:13.636 [2024-06-10 12:28:19.010201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.636 [2024-06-10 12:28:19.010206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.636 [2024-06-10 12:28:19.010209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.010211] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcf30) on tqpair=0xf57ec0 00:24:13.636 [2024-06-10 12:28:19.010215] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:13.636 [2024-06-10 12:28:19.010218] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:13.636 [2024-06-10 12:28:19.010224] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.010227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.636 [2024-06-10 12:28:19.010230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf57ec0) 00:24:13.637 [2024-06-10 12:28:19.010234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.637 [2024-06-10 12:28:19.010242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdcf30, cid 3, qid 0 00:24:13.637 [2024-06-10 12:28:19.010434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.637 [2024-06-10 12:28:19.010439] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.637 [2024-06-10 12:28:19.010441] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.637 [2024-06-10 12:28:19.010444] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfdcf30) on tqpair=0xf57ec0 00:24:13.637 [2024-06-10 12:28:19.010449] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:24:13.637 0 Kelvin (-273 Celsius) 00:24:13.637 Available Spare: 0% 00:24:13.637 Available Spare Threshold: 0% 00:24:13.637 Life Percentage Used: 0% 00:24:13.637 Data Units Read: 0 00:24:13.637 Data Units Written: 0 00:24:13.637 Host Read Commands: 0 00:24:13.637 Host Write Commands: 0 00:24:13.637 Controller Busy Time: 0 minutes 00:24:13.637 Power Cycles: 0 00:24:13.637 Power On Hours: 0 hours 00:24:13.637 Unsafe Shutdowns: 0 00:24:13.637 Unrecoverable Media Errors: 0 00:24:13.637 Lifetime Error Log Entries: 0 00:24:13.637 Warning Temperature Time: 0 minutes 00:24:13.637 Critical Temperature Time: 0 minutes 00:24:13.637 00:24:13.637 Number of Queues 00:24:13.637 ================ 00:24:13.637 Number of I/O Submission Queues: 127 00:24:13.637 Number of I/O Completion Queues: 127 00:24:13.637 00:24:13.637 Active Namespaces 00:24:13.637 ================= 00:24:13.637 Namespace ID:1 00:24:13.637 Error Recovery Timeout: Unlimited 00:24:13.637 Command Set Identifier: NVM (00h) 00:24:13.637 Deallocate: Supported 00:24:13.637 Deallocated/Unwritten Error: Not Supported 00:24:13.637 Deallocated Read Value: 
Unknown 00:24:13.637 Deallocate in Write Zeroes: Not Supported 00:24:13.637 Deallocated Guard Field: 0xFFFF 00:24:13.637 Flush: Supported 00:24:13.637 Reservation: Supported 00:24:13.637 Namespace Sharing Capabilities: Multiple Controllers 00:24:13.637 Size (in LBAs): 131072 (0GiB) 00:24:13.637 Capacity (in LBAs): 131072 (0GiB) 00:24:13.637 Utilization (in LBAs): 131072 (0GiB) 00:24:13.637 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:13.637 EUI64: ABCDEF0123456789 00:24:13.637 UUID: 568d6cf9-21bd-4106-b91f-f314c81ee029 00:24:13.637 Thin Provisioning: Not Supported 00:24:13.637 Per-NS Atomic Units: Yes 00:24:13.637 Atomic Boundary Size (Normal): 0 00:24:13.637 Atomic Boundary Size (PFail): 0 00:24:13.637 Atomic Boundary Offset: 0 00:24:13.637 Maximum Single Source Range Length: 65535 00:24:13.637 Maximum Copy Length: 65535 00:24:13.637 Maximum Source Range Count: 1 00:24:13.637 NGUID/EUI64 Never Reused: No 00:24:13.637 Namespace Write Protected: No 00:24:13.637 Number of LBA Formats: 1 00:24:13.637 Current LBA Format: LBA Format #00 00:24:13.637 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:13.637 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.637 rmmod nvme_tcp 00:24:13.637 rmmod nvme_fabrics 00:24:13.637 rmmod nvme_keyring 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 752856 ']' 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 752856 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 752856 ']' 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 752856 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 752856 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' 
reactor_0 = sudo ']' 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 752856' 00:24:13.637 killing process with pid 752856 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 752856 00:24:13.637 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 752856 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.898 12:28:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.811 12:28:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.811 00:24:15.811 real 0m11.687s 00:24:15.811 user 0m8.197s 00:24:15.811 sys 0m6.181s 00:24:15.811 12:28:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:15.811 12:28:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.811 ************************************ 00:24:15.811 END TEST nvmf_identify 00:24:15.811 ************************************ 00:24:16.071 12:28:21 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:16.071 12:28:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:16.071 12:28:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:16.071 12:28:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.071 ************************************ 00:24:16.071 START TEST nvmf_perf 00:24:16.071 ************************************ 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:16.071 * Looking for test storage... 
00:24:16.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.071 12:28:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.072 12:28:21 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.072 12:28:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:24.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:24.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:24.214 Found net devices under 0000:31:00.0: cvl_0_0 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:24.214 Found net devices under 0000:31:00.1: cvl_0_1 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:24:24.214 00:24:24.214 --- 10.0.0.2 ping statistics --- 00:24:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.214 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:24.214 00:24:24.214 --- 10.0.0.1 ping statistics --- 00:24:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.214 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.214 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=757854 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 757854 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 757854 ']' 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:24.215 12:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:24.215 [2024-06-10 12:28:29.756051] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:24:24.215 [2024-06-10 12:28:29.756129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.215 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.475 [2024-06-10 12:28:29.837029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.475 [2024-06-10 12:28:29.912657] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.475 [2024-06-10 12:28:29.912691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:24.475 [2024-06-10 12:28:29.912700] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.475 [2024-06-10 12:28:29.912706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.475 [2024-06-10 12:28:29.912711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.475 [2024-06-10 12:28:29.912850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.475 [2024-06-10 12:28:29.912969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.475 [2024-06-10 12:28:29.913125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.475 [2024-06-10 12:28:29.913126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:25.044 12:28:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:25.614 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:25.614 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:25.614 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:25.614 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:25.874 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:25.874 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:25.874 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:25.874 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:25.874 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:26.134 [2024-06-10 12:28:31.524899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.134 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.134 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:26.134 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.394 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:26.394 12:28:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:26.725 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.725 [2024-06-10 12:28:32.195429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.725 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:26.998 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:26.998 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:26.998 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:26.998 12:28:32 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:28.379 Initializing NVMe Controllers 00:24:28.379 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:28.379 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:28.379 Initialization complete. Launching workers. 00:24:28.379 ======================================================== 00:24:28.379 Latency(us) 00:24:28.379 Device Information : IOPS MiB/s Average min max 00:24:28.379 PCIE (0000:65:00.0) NSID 1 from core 0: 79282.00 309.70 402.98 19.57 6788.28 00:24:28.379 ======================================================== 00:24:28.379 Total : 79282.00 309.70 402.98 19.57 6788.28 00:24:28.379 00:24:28.379 12:28:33 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:28.379 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.760 Initializing NVMe Controllers 00:24:29.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:29.760 Initialization complete. Launching workers. 
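Once nvmfappstart has the target listening on /var/tmp/spdk.sock, perf.sh configures it entirely over JSON-RPC: gen_nvme.sh plus load_subsystem_config attach the local controller at 0000:65:00.0 as bdev Nvme0n1, and the setup traced above condenses to the sequence below (paths shortened to rpc.py). The first spdk_nvme_perf run above then hits that PCIe SSD directly (~79 K IOPS at 4 KiB, q=32) as a local baseline before any fabrics runs.

    rpc.py nvmf_create_transport -t tcp -o               # TCP transport, opts from common.sh
    rpc.py bdev_malloc_create 64 512                     # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

-a allows any host NQN to connect and -s sets the serial number; the two namespaces are why every fabrics latency table that follows has an NSID 1 (malloc) and an NSID 2 (physical NVMe) row.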
00:24:29.760 ======================================================== 00:24:29.760 Latency(us) 00:24:29.760 Device Information : IOPS MiB/s Average min max 00:24:29.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13032.29 241.05 45721.41 00:24:29.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23056.16 7960.12 47903.20 00:24:29.760 ======================================================== 00:24:29.760 Total : 124.00 0.48 16669.98 241.05 47903.20 00:24:29.760 00:24:29.760 12:28:34 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.760 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.698 Initializing NVMe Controllers 00:24:30.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:30.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:30.699 Initialization complete. Launching workers. 00:24:30.699 ======================================================== 00:24:30.699 Latency(us) 00:24:30.699 Device Information : IOPS MiB/s Average min max 00:24:30.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10536.74 41.16 3040.24 513.71 7098.48 00:24:30.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.54 14.91 8438.91 6320.16 16970.12 00:24:30.699 ======================================================== 00:24:30.699 Total : 14354.28 56.07 4476.02 513.71 16970.12 00:24:30.699 00:24:30.958 12:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:30.958 12:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:30.958 12:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.958 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.499 Initializing NVMe Controllers 00:24:33.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.499 Controller IO queue size 128, less than required. 00:24:33.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:33.499 Controller IO queue size 128, less than required. 00:24:33.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:33.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.499 Initialization complete. Launching workers. 
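These fabrics runs are spdk_nvme_perf pointed at the listener: -q is the queue depth, -o the I/O size in bytes, -w/-M a 50/50 random read/write mix, -t the run time in seconds, and -r the transport ID string. The q=1 pass is a latency smoke test (only ~124 I/Os complete in its one-second window, table above); the q=32 run just launched lands around 14 K IOPS (table below), and its -HI pair, going by perf's help text, enables TCP header (-H) and data (-I) digests, adding CRC work per PDU. A repeatable form of that invocation:

    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -H -I \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'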
00:24:33.499 ======================================================== 00:24:33.499 Latency(us) 00:24:33.499 Device Information : IOPS MiB/s Average min max 00:24:33.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1409.41 352.35 92189.17 54100.45 159632.90 00:24:33.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.96 149.49 226218.46 77634.08 358318.71 00:24:33.499 ======================================================== 00:24:33.499 Total : 2007.37 501.84 132114.21 54100.45 358318.71 00:24:33.499 00:24:33.499 12:28:38 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:33.499 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.070 No valid NVMe controllers or AIO or URING devices found 00:24:34.070 Initializing NVMe Controllers 00:24:34.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.070 Controller IO queue size 128, less than required. 00:24:34.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.070 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:34.070 Controller IO queue size 128, less than required. 00:24:34.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.070 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:34.070 WARNING: Some requested NVMe devices were skipped 00:24:34.070 12:28:39 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:34.070 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.608 Initializing NVMe Controllers 00:24:36.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.608 Controller IO queue size 128, less than required. 00:24:36.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.608 Controller IO queue size 128, less than required. 00:24:36.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:36.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:36.608 Initialization complete. Launching workers. 
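The -o 36964 run above is, by all appearances, a deliberate negative test: 36964 = 72 * 512 + 100, so it is not a multiple of either namespace's 512-byte sector size; perf drops both namespaces with the warnings shown and falls out with "No valid NVMe controllers". The suite only needs this path to fail cleanly. An aligned variant would change only the I/O size (36864 below is a hypothetical corrected value, not from the log):

    # 36964 % 512 = 100 -> namespace skipped; 36864 = 72 * 512 would be accepted
    ./build/bin/spdk_nvme_perf -q 128 -o 36864 -O 4096 -w randrw -M 50 -t 5 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4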
00:24:36.608 00:24:36.608 ==================== 00:24:36.608 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:36.608 TCP transport: 00:24:36.608 polls: 23939 00:24:36.608 idle_polls: 14085 00:24:36.608 sock_completions: 9854 00:24:36.608 nvme_completions: 5397 00:24:36.608 submitted_requests: 8118 00:24:36.608 queued_requests: 1 00:24:36.608 00:24:36.608 ==================== 00:24:36.608 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:36.608 TCP transport: 00:24:36.608 polls: 22156 00:24:36.608 idle_polls: 10983 00:24:36.608 sock_completions: 11173 00:24:36.608 nvme_completions: 8753 00:24:36.608 submitted_requests: 13126 00:24:36.608 queued_requests: 1 00:24:36.608 ======================================================== 00:24:36.608 Latency(us) 00:24:36.608 Device Information : IOPS MiB/s Average min max 00:24:36.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1347.45 336.86 97058.17 60395.20 162697.07 00:24:36.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2185.49 546.37 58707.05 29610.54 97790.42 00:24:36.608 ======================================================== 00:24:36.608 Total : 3532.95 883.24 73334.04 29610.54 162697.07 00:24:36.608 00:24:36.608 12:28:41 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:36.608 12:28:41 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.608 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.608 rmmod nvme_tcp 00:24:36.608 rmmod nvme_fabrics 00:24:36.868 rmmod nvme_keyring 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 757854 ']' 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 757854 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 757854 ']' 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 757854 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 757854 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- 
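--transport-stat makes perf dump per-namespace TCP poll-group counters ahead of the usual latency table. Reading the field names as they suggest: polls is the total number of poll iterations, idle_polls those that completed no work, sock_completions and nvme_completions count socket-level versus NVMe-level completions, and queued_requests counts submissions that had to wait for a free slot. From the NSID 1 block above, the reactor found nothing to do on roughly 59% of its spins:

    awk 'BEGIN { printf "idle ratio: %.1f%%\n", 100 * 14085 / 23939 }'   # -> 58.8%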
common/autotest_common.sh@967 -- # echo 'killing process with pid 757854' 00:24:36.869 killing process with pid 757854 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 757854 00:24:36.869 12:28:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 757854 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.777 12:28:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.319 12:28:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.319 00:24:41.319 real 0m24.888s 00:24:41.319 user 0m58.888s 00:24:41.319 sys 0m8.746s 00:24:41.319 12:28:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:41.319 12:28:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:41.319 ************************************ 00:24:41.319 END TEST nvmf_perf 00:24:41.319 ************************************ 00:24:41.319 12:28:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:41.319 12:28:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:41.319 12:28:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:41.319 12:28:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.319 ************************************ 00:24:41.319 START TEST nvmf_fio_host 00:24:41.319 ************************************ 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:41.319 * Looking for test storage... 
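The teardown tail above is the standard epilogue shared by every suite in this run: the EXIT trap installed at startup fires nvmftestfini, which modprobe -r's nvme-tcp/nvme-fabrics/nvme-keyring, kills the target by its recorded pid (after checking via ps that the pid still names a reactor process rather than a sudo wrapper), removes the network namespace, and flushes the leftover address; run_test then prints the real/user/sys accounting and the END/START banners. The trap itself is one line:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT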
00:24:41.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.319 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.320 12:28:46 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.454 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:49.455 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
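The wall of nvmf/common.sh trace here is NIC discovery done purely by PCI vendor:device ID: 0x8086:0x1592 and 0x8086:0x159b are E810 variants, 0x8086:0x37d2 is x722, and the 0x15b3:* entries are the supported Mellanox parts; with SPDK_TEST_NVMF_NICS=e810 only the e810 list survives into pci_devs. Each matching function is then mapped to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from:

    pci=0000:31:00.0
    ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 on this rig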
00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:49.455 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:49.455 Found net devices under 0000:31:00.0: cvl_0_0 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:49.455 Found net devices under 0000:31:00.1: cvl_0_1 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:49.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:24:49.455 00:24:49.455 --- 10.0.0.2 ping statistics --- 00:24:49.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.455 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:49.455 00:24:49.455 --- 10.0.0.1 ping statistics --- 00:24:49.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.455 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=765459 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 765459 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 765459 ']' 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:49.455 12:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.455 [2024-06-10 12:28:54.722070] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:24:49.455 [2024-06-10 12:28:54.722135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.455 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.455 [2024-06-10 12:28:54.800719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.455 [2024-06-10 12:28:54.879891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
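The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice is most likely benign on this rig: it would follow from the hugepage pool having been reserved on NUMA node 0 only, in which case DPDK merely observes that node 1 contributes none. The per-node reservation can be checked with:

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages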
00:24:49.455 [2024-06-10 12:28:54.879929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.455 [2024-06-10 12:28:54.879937] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.455 [2024-06-10 12:28:54.879943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.455 [2024-06-10 12:28:54.879953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.455 [2024-06-10 12:28:54.880090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.455 [2024-06-10 12:28:54.880215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.455 [2024-06-10 12:28:54.880317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.455 [2024-06-10 12:28:54.880318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.024 12:28:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:50.024 12:28:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:50.024 12:28:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:50.285 [2024-06-10 12:28:55.642967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.285 12:28:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:50.285 12:28:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:50.285 12:28:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.285 12:28:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:50.285 Malloc1 00:24:50.544 12:28:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.544 12:28:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:50.805 12:28:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.805 [2024-06-10 12:28:56.356353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.805 12:28:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:51.149 12:28:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:51.409 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:51.409 fio-3.35 00:24:51.409 Starting 1 thread 00:24:51.409 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.976 00:24:53.976 test: (groupid=0, jobs=1): err= 0: pid=766116: Mon Jun 10 12:28:59 2024 00:24:53.976 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:24:53.976 slat (usec): min=2, max=218, avg= 2.19, stdev= 1.83 00:24:53.976 clat (usec): min=2950, max=8713, avg=5120.95, stdev=667.29 00:24:53.976 lat (usec): min=2981, max=8715, avg=5123.14, stdev=667.32 00:24:53.976 clat percentiles (usec): 00:24:53.976 | 1.00th=[ 4228], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4686], 00:24:53.976 | 30.00th=[ 4817], 40.00th=[ 4883], 50.00th=[ 5014], 60.00th=[ 5080], 00:24:53.976 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5604], 95.00th=[ 6915], 00:24:53.976 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 8225], 99.95th=[ 8455], 00:24:53.976 | 99.99th=[ 8455] 00:24:53.976 bw ( KiB/s): min=47976, max=57360, 
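fio_nvme is how the suite drives stock fio through SPDK's userspace NVMe driver: the loop above walks ldd output for libasan/libclang_rt.asan so a sanitizer runtime, if linked, can be preloaded ahead of the plugin (both lookups come back empty here, hence the bare LD_PRELOAD), then fio is launched with ioengine=spdk and the whole connection encoded into --filename. The equivalent direct invocation, with the paths from this workspace:

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096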
per=99.94%, avg=54938.00, stdev=4642.11, samples=4 00:24:53.976 iops : min=11994, max=14340, avg=13734.50, stdev=1160.53, samples=4 00:24:53.976 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec); 0 zone resets 00:24:53.976 slat (usec): min=2, max=193, avg= 2.29, stdev= 1.32 00:24:53.976 clat (usec): min=2311, max=7429, avg=4133.88, stdev=554.96 00:24:53.976 lat (usec): min=2329, max=7432, avg=4136.17, stdev=555.00 00:24:53.976 clat percentiles (usec): 00:24:53.976 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:24:53.976 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4113], 00:24:53.976 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 5669], 00:24:53.976 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 6915], 00:24:53.976 | 99.99th=[ 7373] 00:24:53.976 bw ( KiB/s): min=48632, max=57216, per=100.00%, avg=54868.00, stdev=4162.49, samples=4 00:24:53.976 iops : min=12158, max=14304, avg=13717.00, stdev=1040.62, samples=4 00:24:53.976 lat (msec) : 4=23.56%, 10=76.44% 00:24:53.976 cpu : usr=70.99%, sys=26.81%, ctx=45, majf=0, minf=6 00:24:53.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:53.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:53.976 issued rwts: total=27540,27490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:53.976 00:24:53.976 Run status group 0 (all jobs): 00:24:53.976 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:53.976 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 
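The second fio pass now starting swaps in mock_sgl_config.fio, which (judging from its name and the 16 KiB block size in the job line below) is there to exercise the plugin's scatter-gather list handling rather than raw throughput; the invocation pattern is identical, only the job file changes:

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'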
00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:53.976 12:28:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:54.242 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:54.242 fio-3.35 00:24:54.242 Starting 1 thread 00:24:54.242 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.782 00:24:56.782 test: (groupid=0, jobs=1): err= 0: pid=766857: Mon Jun 10 12:29:02 2024 00:24:56.782 read: IOPS=8970, BW=140MiB/s (147MB/s)(286MiB/2037msec) 00:24:56.782 slat (usec): min=3, max=111, avg= 3.67, stdev= 1.65 00:24:56.782 clat (usec): min=1375, max=52668, avg=8817.60, stdev=3857.14 00:24:56.782 lat (usec): min=1379, max=52671, avg=8821.27, stdev=3857.21 00:24:56.782 clat percentiles (usec): 00:24:56.782 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6718], 00:24:56.782 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:24:56.782 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[11600], 00:24:56.782 | 99.00th=[13829], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:24:56.782 | 99.99th=[52691] 00:24:56.782 bw ( KiB/s): min=62848, max=85600, per=49.88%, avg=71592.00, stdev=9774.29, samples=4 00:24:56.782 iops : min= 3928, max= 5350, avg=4474.50, stdev=610.89, samples=4 00:24:56.782 write: IOPS=5414, BW=84.6MiB/s (88.7MB/s)(146MiB/1722msec); 0 zone resets 00:24:56.782 slat (usec): min=40, max=444, avg=41.28, stdev= 8.83 00:24:56.782 clat (usec): min=2471, max=17680, avg=9714.26, stdev=1635.90 00:24:56.782 lat (usec): min=2511, max=17720, avg=9755.54, stdev=1638.01 00:24:56.782 clat percentiles (usec): 00:24:56.782 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8356], 00:24:56.782 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:24:56.782 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:24:56.782 | 99.00th=[14615], 99.50th=[15664], 99.90th=[16909], 99.95th=[17433], 00:24:56.782 | 99.99th=[17695] 00:24:56.782 bw ( KiB/s): min=65664, max=88960, per=86.10%, avg=74584.00, stdev=10018.86, samples=4 00:24:56.782 iops : min= 4104, max= 5560, avg=4661.50, stdev=626.18, samples=4 00:24:56.782 lat (msec) : 2=0.01%, 4=0.34%, 10=70.50%, 20=28.69%, 50=0.30% 00:24:56.782 lat (msec) : 100=0.16% 00:24:56.782 cpu : usr=83.50%, sys=14.39%, ctx=27, majf=0, minf=21 
00:24:56.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:56.782 issued rwts: total=18272,9323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:56.782 00:24:56.782 Run status group 0 (all jobs): 00:24:56.782 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=286MiB (299MB), run=2037-2037msec 00:24:56.782 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=146MiB (153MB), run=1722-1722msec 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.783 rmmod nvme_tcp 00:24:56.783 rmmod nvme_fabrics 00:24:56.783 rmmod nvme_keyring 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 765459 ']' 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 765459 ']' 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 765459' 00:24:57.043 killing process with pid 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 765459 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:56.782 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:56.783 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:56.783 rmmod nvme_tcp
00:24:56.783 rmmod nvme_fabrics
00:24:56.783 rmmod nvme_keyring
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 765459 ']'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 765459 ']'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 765459'
killing process with pid 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 765459
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:57.043 12:29:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:59.583 12:29:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:59.583
00:24:59.583 real 0m18.248s
00:24:59.583 user 1m8.741s
00:24:59.583 sys 0m7.940s
00:24:59.583 12:29:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable
00:24:59.583 12:29:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.583 ************************************
00:24:59.583 END TEST nvmf_fio_host
00:24:59.583 ************************************
00:24:59.583 12:29:04 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:24:59.583 12:29:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:24:59.583 12:29:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:24:59.583 12:29:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:59.583 ************************************
00:24:59.583 START TEST nvmf_failover
00:24:59.583 ************************************
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
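Condensed, the nvmftestfini/nvmfcleanup teardown traced above amounts to the following (a simplified sketch reconstructed from the xtrace lines; the real common.sh wraps this in a retry loop and traps, and the body of _remove_spdk_ns is not shown in the trace):

modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 765459 && wait 765459       # stop the nvmf_tgt reactor process
ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns does with the target namespace
ip -4 addr flush cvl_0_1         # clear the initiator-side interface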
00:24:59.583 * Looking for test storage...
00:24:59.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs
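The build_nvmf_app_args trace above is easier to read as the underlying bash array pattern (minimal sketch; the base command is set elsewhere in common.sh and is assumed here):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed base command
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id and tracepoint mask, per common.sh@29
NVMF_APP+=("${NO_HUGE[@]}")                  # empty array here, so it expands to nothing
"${NVMF_APP[@]}" &                           # eventually launched with all accumulated flags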
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable
00:24:59.583 12:29:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=()
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
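The e810/x722/mlx arrays are keyed by PCI vendor:device IDs; the same classification can be checked by hand with lspci (illustrative commands, not from the harness; the groupings mirror the ID tables above):

lspci -d 8086:159b   # matched into e810, reported as 'Found 0000:31:00.x (0x8086 - 0x159b)' below
lspci -d 8086:37d2   # would be grouped under x722
lspci -d 15b3:1017   # would be grouped under mlx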
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:25:07.721 Found 0000:31:00.0 (0x8086 - 0x159b)
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:25:07.721 Found 0000:31:00.1 (0x8086 - 0x159b)
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:25:07.721 Found net devices under 0000:31:00.0: cvl_0_0
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:25:07.721 Found net devices under 0000:31:00.1: cvl_0_1
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:07.721 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:07.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:07.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms
00:25:07.722
00:25:07.722 --- 10.0.0.2 ping statistics ---
00:25:07.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:07.722 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:07.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:07.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:25:07.722
00:25:07.722 --- 10.0.0.1 ping statistics ---
00:25:07.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:07.722 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=771947
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 771947
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 771947 ']'
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:07.722 12:29:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.722 [2024-06-10 12:29:12.967210] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
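Reproducing the target launch by hand looks roughly like this (sketch; binary path, namespace and flags are taken from the trace, while the readiness loop is an assumption standing in for waitforlisten):

sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the RPC socket until the app is up, much as waitforlisten does
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done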
00:25:07.722 [2024-06-10 12:29:12.967275] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:07.722 EAL: No free 2048 kB hugepages reported on node 1
00:25:07.722 [2024-06-10 12:29:13.062241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:07.722 [2024-06-10 12:29:13.155848] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:07.722 [2024-06-10 12:29:13.155909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:07.722 [2024-06-10 12:29:13.155918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:07.722 [2024-06-10 12:29:13.155925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:07.722 [2024-06-10 12:29:13.155931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:07.722 [2024-06-10 12:29:13.156060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:25:07.722 [2024-06-10 12:29:13.156247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:25:07.722 [2024-06-10 12:29:13.156247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:08.290 12:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:08.550 [2024-06-10 12:29:13.930679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:08.550 12:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:08.550 Malloc0
00:25:08.550 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:08.810 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:09.071 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:09.071 [2024-06-10 12:29:14.625741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
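For reference, the target-side provisioning just traced reduces to five RPCs (exactly the calls shown above, with the rpc.py path shortened to $rpc_py as failover.sh@14 defines it):

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420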
00:25:09.071 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:09.331 [2024-06-10 12:29:14.790213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:09.331 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.591 [2024-06-10 12:29:14.950716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=772312
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 772312 /var/tmp/bdevperf.sock
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 772312 ']'
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:09.591 12:29:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:10.531 12:29:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:25:10.531 12:29:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:25:10.531 12:29:15 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:10.531 NVMe0n1
00:25:10.531 12:29:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:11.101
00:25:11.101 12:29:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=772654
00:25:11.101 12:29:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:11.101 12:29:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:12.041 12:29:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
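bdevperf attaches the same controller name over two portals, which gives NVMe0n1 an active path (port 4420) and a standby path (port 4421); removing the 4420 listener above is what forces the first failover while perform_tests keeps I/O in flight. Condensed host-side sequence (a sketch; socket path and arguments exactly as in the trace):

$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # second path
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # trigger failover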
00:25:12.041 [2024-06-10 12:29:17.608535] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set
00:25:12.041 (previous tcp.c:1602 message repeated for each timestamp from [2024-06-10 12:29:17.608574] through [2024-06-10 12:29:17.609012])
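The burst above appears to be the target tearing down the port-4420 qpair while bdevperf still has I/O queued against it; the remainder of the trace repeats the same cycle across ports 4421 and 4422. In script form, the cycle that follows is essentially (a sketch matching the failover.sh@45-@57 lines below):

sleep 3   # give outstanding I/O time to fail over to the surviving path
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # restore for failback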
same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.608985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.608989] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.608994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.608999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.609003] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.609007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 [2024-06-10 12:29:17.609012] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddb2b0 is same with the state(5) to be set 00:25:12.042 12:29:17 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:15.404 12:29:20 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:15.665 00:25:15.665 12:29:21 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.665 [2024-06-10 12:29:21.217716] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217775] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217779] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 
00:25:15.665 [2024-06-10 12:29:21.217802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217807] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217811] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217824] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217837] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217854] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217863] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217867] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217871] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217876] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217885] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217889] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is 
same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217898] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217918] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217926] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.665 [2024-06-10 12:29:21.217939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217948] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217952] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217965] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217979] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217983] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.217997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.218001] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 [2024-06-10 12:29:21.218005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddc130 is same with the state(5) to be set 00:25:15.666 12:29:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:18.961 12:29:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.961 [2024-06-10 12:29:24.390016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.961 12:29:24 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:19.901 12:29:25 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.161 [2024-06-10 12:29:25.565201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565238] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565245] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565250] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565255] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565260] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565264] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565269] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565273] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565282] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565291] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with the state(5) to be set 00:25:20.161 [2024-06-10 12:29:25.565296] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd010 is same with 
00:25:20.161 12:29:25 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 772654
00:25:26.755 0
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 772312 ']'
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 772312'
killing process with pid 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 772312
00:25:26.755 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:26.755 [2024-06-10 12:29:15.016513] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:25:26.755 [2024-06-10 12:29:15.016567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772312 ]
00:25:26.755 EAL: No free 2048 kB hugepages reported on node 1
00:25:26.755 [2024-06-10 12:29:15.082233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:26.755 [2024-06-10 12:29:15.146093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:25:26.756 [2024-06-10 12:29:17.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.756 [2024-06-10 12:29:17.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 111 further nvme_io_qpair_print_command / 'ABORTED - SQ DELETION' pairs omitted (READ and WRITE, lba:95432 through lba:96312) ...]
00:25:26.757 [2024-06-10 12:29:17.613041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:26.757 [2024-06-10 12:29:17.613048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0
00:25:26.757 [2024-06-10 12:29:17.613056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 15 further 'aborting queued i/o' / 'Command completed manually:' / WRITE / 'ABORTED - SQ DELETION' sequences omitted (lba:96328 through lba:96440) ...]
00:25:26.757 [2024-06-10 12:29:17.624728] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2180670 was disconnected and freed. reset controller.
00:25:26.757 [2024-06-10 12:29:17.624738] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:26.757 [2024-06-10 12:29:17.624765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.757 [2024-06-10 12:29:17.624774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.757 [2024-06-10 12:29:17.624784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.757 [2024-06-10 12:29:17.624792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.757 [2024-06-10 12:29:17.624799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.757 [2024-06-10 12:29:17.624806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.757 [2024-06-10 12:29:17.624814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.757 [2024-06-10 12:29:17.624821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.757 [2024-06-10 12:29:17.624829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:26.757 [2024-06-10 12:29:17.624877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215fa90 (9): Bad file descriptor
00:25:26.757 [2024-06-10 12:29:17.628438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:26.757 [2024-06-10 12:29:17.662017] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:26.757 [2024-06-10 12:29:21.219068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.757 [2024-06-10 12:29:21.219105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs: queued READ commands (lba 34632-35096) and WRITE commands (lba 35104-35632) on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:26.759 [2024-06-10 12:29:21.221215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:26.759 [2024-06-10 12:29:21.221223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:26.759 [2024-06-10 12:29:21.221230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35640 len:8 PRP1 0x0 PRP2 0x0
00:25:26.759 [2024-06-10 12:29:21.221238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.759 [2024-06-10 12:29:21.221274] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2182920 was disconnected and freed. reset controller.
00:25:26.759 [2024-06-10 12:29:21.221283] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:26.759 [2024-06-10 12:29:21.221302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.759 [2024-06-10 12:29:21.221310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.759 [2024-06-10 12:29:21.221318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.759 [2024-06-10 12:29:21.221325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.759 [2024-06-10 12:29:21.221335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.759 [2024-06-10 12:29:21.221342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.759 [2024-06-10 12:29:21.221350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.759 [2024-06-10 12:29:21.221358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.759 [2024-06-10 12:29:21.221366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:26.759 [2024-06-10 12:29:21.221398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215fa90 (9): Bad file descriptor
00:25:26.759 [2024-06-10 12:29:21.224933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:26.759 [2024-06-10 12:29:21.299878] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:26.759 [2024-06-10 12:29:25.567031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.759 [2024-06-10 12:29:25.567070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs: queued READ commands (lba 55576-55896) and WRITE commands (lba 55904-55952) on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:26.760 [2024-06-10 12:29:25.567887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:26.760 [2024-06-10 12:29:25.567894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.760 [2024-06-10 12:29:25.567903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:26.760 [2024-06-10 12:29:25.567910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.567919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.567926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.567935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.567942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.567951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.567958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.567966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.567973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.567983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.567990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.760 [2024-06-10 12:29:25.568388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.760 [2024-06-10 12:29:25.568396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568589] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.761 [2024-06-10 12:29:25.568675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56344 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56352 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56360 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 
12:29:25.568774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56368 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56376 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56384 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56392 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56400 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56408 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56416 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.568977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56424 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.568991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.568996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56432 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56440 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56448 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56456 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56464 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56472 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56480 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56488 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56496 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56504 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:26.761 [2024-06-10 12:29:25.569255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56512 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56520 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56528 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56536 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56544 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.569377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.569384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.569389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.569395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56552 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.580395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580427] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.580435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.580442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56560 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.580450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.580462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.580469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56568 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.580476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.580489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.580495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56576 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.580502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.761 [2024-06-10 12:29:25.580514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.761 [2024-06-10 12:29:25.580520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56584 len:8 PRP1 0x0 PRP2 0x0 00:25:26.761 [2024-06-10 12:29:25.580528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580569] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2182740 was disconnected and freed. reset controller. 
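A note on the long run of completions above: the status printed for every outstanding command is (sct/sc) = (00/08), i.e. generic command status, Command Aborted due to SQ Deletion. When bdev_nvme tears down the TCP qpair to reset the controller, each queued READ/WRITE is completed manually with that status (the "Command completed manually" / "aborting queued i/o" records) rather than being silently dropped, which is why the same completion repeats for every in-flight LBA. A quick way to tally these aborts when reading such a capture (a sketch only; try.txt is the output file this test writes, per the host/failover.sh trace further down):

    grep -c 'ABORTED - SQ DELETION (00/08)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt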
00:25:26.761 [2024-06-10 12:29:25.580583] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:26.761 [2024-06-10 12:29:25.580610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.761 [2024-06-10 12:29:25.580618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.761 [2024-06-10 12:29:25.580637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.761 [2024-06-10 12:29:25.580652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.761 [2024-06-10 12:29:25.580660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.761 [2024-06-10 12:29:25.580667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.762 [2024-06-10 12:29:25.580674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.762 [2024-06-10 12:29:25.580713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215fa90 (9): Bad file descriptor 00:25:26.762 [2024-06-10 12:29:25.584262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.762 [2024-06-10 12:29:25.788941] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
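For orientation, the reset sequence just logged is what host/failover.sh exercises: one NVMe bdev is attached over several TCP paths to the same subsystem, and paths are then removed one at a time so bdev_nvme has to fail over between them. A minimal sketch of that setup, reusing the rpc.py calls that appear verbatim elsewhere in this trace (addresses, ports and NQN are the ones from this run; the loop is only a condensation):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose the subsystem on the two extra ports, then register all three
    # paths under the same bdev name so bdev_nvme can fail over between them.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done
    # Dropping the active path triggers the "Start failover from ... to ..."
    # notice seen above.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn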
00:25:26.762 
00:25:26.762 Latency(us)
00:25:26.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:26.762 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:26.762 Verification LBA range: start 0x0 length 0x4000
00:25:26.762 NVMe0n1 : 15.01 11191.65 43.72 729.52 0.00 10709.77 512.00 21189.97
00:25:26.762 ===================================================================================================================
00:25:26.762 Total : 11191.65 43.72 729.52 0.00 10709.77 512.00 21189.97
00:25:26.762 Received shutdown signal, test time was about 15.000000 seconds
00:25:26.762 
00:25:26.762 Latency(us)
00:25:26.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:26.762 ===================================================================================================================
00:25:26.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=775667
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 775667 /var/tmp/bdevperf.sock
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 775667 ']'
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
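The pass criterion for the phase that just finished is visible in the trace above: three paths were dropped during the 15-second run, so the captured output must contain exactly three "Resetting controller successful" events. The check reduces to the following (a sketch of the same logic, assuming the capture file is the try.txt this test writes):

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$out")
    # host/failover.sh@67 fails the test when the count is anything but 3.
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }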
00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:26.762 12:29:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:27.332 12:29:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:27.332 12:29:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:27.332 12:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:27.332 [2024-06-10 12:29:32.763284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:27.332 12:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:27.332 [2024-06-10 12:29:32.931683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:27.593 12:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.854 NVMe0n1 00:25:27.854 12:29:33 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.424 00:25:28.424 12:29:33 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.424 00:25:28.424 12:29:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:28.424 12:29:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:28.683 12:29:34 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.943 12:29:34 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:32.245 12:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.245 12:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:32.245 12:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.245 12:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=776683 00:25:32.245 12:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 776683 00:25:33.187 0 00:25:33.187 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:33.187 [2024-06-10 12:29:31.865045] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:25:33.187 [2024-06-10 12:29:31.865111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775667 ] 00:25:33.187 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.187 [2024-06-10 12:29:31.931080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.187 [2024-06-10 12:29:31.992966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.187 [2024-06-10 12:29:34.288694] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:33.187 [2024-06-10 12:29:34.288739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.187 [2024-06-10 12:29:34.288750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.187 [2024-06-10 12:29:34.288759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.187 [2024-06-10 12:29:34.288766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.187 [2024-06-10 12:29:34.288774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.187 [2024-06-10 12:29:34.288782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.187 [2024-06-10 12:29:34.288790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.187 [2024-06-10 12:29:34.288799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.187 [2024-06-10 12:29:34.288806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.187 [2024-06-10 12:29:34.288832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.187 [2024-06-10 12:29:34.288848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x650a90 (9): Bad file descriptor 00:25:33.187 [2024-06-10 12:29:34.302327] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:33.187 Running I/O for 1 seconds... 
00:25:33.187 
00:25:33.187 Latency(us)
00:25:33.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.187 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:33.187 Verification LBA range: start 0x0 length 0x4000
00:25:33.187 NVMe0n1 : 1.00 11408.49 44.56 0.00 0.00 11168.76 2252.80 16165.55
00:25:33.187 ===================================================================================================================
00:25:33.187 Total : 11408.49 44.56 0.00 0.00 11168.76 2252.80 16165.55
00:25:33.187 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:33.187 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:33.187 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:33.447 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:33.447 12:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:33.707 12:29:39 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:33.707 12:29:39 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 775667
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 775667 ']'
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 775667
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 775667
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 775667'
killing process with pid 775667
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 775667
00:25:37.004 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 775667
00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:37.264 12:29:42 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.264 rmmod nvme_tcp 00:25:37.264 rmmod nvme_fabrics 00:25:37.264 rmmod nvme_keyring 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 771947 ']' 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 771947 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 771947 ']' 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 771947 00:25:37.264 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 771947 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 771947' 00:25:37.524 killing process with pid 771947 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 771947 00:25:37.524 12:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 771947 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.524 12:29:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.119 12:29:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:40.119 00:25:40.119 real 0m40.358s 00:25:40.119 user 2m2.490s 00:25:40.119 sys 0m8.571s 00:25:40.119 12:29:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:40.119 12:29:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.119 
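The teardown traced above (nvmftestfini) is worth keeping in mind when reproducing this test by hand; condensed, it amounts to the following steps, with the rpc.py path, module names and the cvl_0_1 interface taken from this log (a sketch, run as root; the trace additionally kills the target app, pid 771947 in this run):

    # Remove the subsystem, then unload the kernel initiator modules and
    # flush the test interface address, mirroring the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1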
************************************ 00:25:40.119 END TEST nvmf_failover 00:25:40.119 ************************************ 00:25:40.119 12:29:45 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:40.119 12:29:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:40.119 12:29:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:40.119 12:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.119 ************************************ 00:25:40.119 START TEST nvmf_host_discovery 00:25:40.119 ************************************ 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:40.119 * Looking for test storage... 00:25:40.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.119 12:29:45 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.119 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.120 12:29:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:48.267 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:48.267 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.267 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:48.268 Found net devices under 0000:31:00.0: cvl_0_0 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:48.268 Found net devices under 0000:31:00.1: cvl_0_1 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:25:48.268 00:25:48.268 --- 10.0.0.2 ping statistics --- 00:25:48.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.268 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:25:48.268 00:25:48.268 --- 10.0.0.1 ping statistics --- 00:25:48.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.268 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=782372 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 782372 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 782372 ']' 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:48.268 12:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.268 [2024-06-10 12:29:53.606547] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:25:48.268 [2024-06-10 12:29:53.606613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.268 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.268 [2024-06-10 12:29:53.699587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.268 [2024-06-10 12:29:53.792706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.268 [2024-06-10 12:29:53.792765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.268 [2024-06-10 12:29:53.792773] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.268 [2024-06-10 12:29:53.792780] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.268 [2024-06-10 12:29:53.792786] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
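For reference, the nvmf/common.sh trace above reduces to the short sequence below. This is an abridged sketch of what the harness just did, not a universal recipe: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, and the nvmf_tgt arguments are specific to this run.

    ip netns add cvl_0_0_ns_spdk                                   # isolate the target side in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Note that both ping checks (10.0.0.2 from the initiator side, 10.0.0.1 from inside the namespace) complete before nvmf_tgt is launched, so a later connect failure would implicate the NVMe/TCP layer rather than basic reachability.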
00:25:48.268 [2024-06-10 12:29:53.792813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.840 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.841 [2024-06-10 12:29:54.412726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.841 [2024-06-10 12:29:54.424857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.841 null0 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.841 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.100 null1 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=782706 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 782706 /tmp/host.sock 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 782706 ']' 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:49.100 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:49.100 12:29:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.100 [2024-06-10 12:29:54.523908] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:25:49.100 [2024-06-10 12:29:54.523972] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782706 ] 00:25:49.100 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.100 [2024-06-10 12:29:54.579041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.100 [2024-06-10 12:29:54.632870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.039 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 [2024-06-10 12:29:55.652036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:25:50.299 12:29:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:50.869 [2024-06-10 12:29:56.311092] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:50.869 [2024-06-10 12:29:56.311115] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:50.869 [2024-06-10 12:29:56.311128] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:50.869 [2024-06-10 12:29:56.400400] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:51.130 [2024-06-10 12:29:56.502937] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:51.130 [2024-06-10 12:29:56.502958] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.391 12:29:56 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:51.391 12:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.652 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.653 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.653 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.653 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.653 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:51.914 
12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.914 [2024-06-10 12:29:57.368507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.914 [2024-06-10 12:29:57.368891] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:51.914 [2024-06-10 12:29:57.368919] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
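The host/discovery.sh@63 fragments above are one expansion of the test's get_subsystem_paths helper. Written out as a single pipeline (with the /tmp/host.sock socket path and nvme0 controller name used in this run, and scripts/rpc.py standing in for the rpc_cmd wrapper), the condition being polled is roughly:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n | xargs     # prints "4420" before, "4420 4421" once the second path attaches

waitforcondition re-evaluates this up to ten times (the max=10 counter in the trace), sleeping a second between attempts, until the trsvcid list matches "$NVMF_PORT $NVMF_SECOND_PORT".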
00:25:51.914 [2024-06-10 12:29:57.497692] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:51.914 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:52.176 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:52.176 12:29:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1
00:25:52.176 [2024-06-10 12:29:57.597382] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:52.176 [2024-06-10 12:29:57.597399] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:52.176 [2024-06-10 12:29:57.597404] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.118 [2024-06-10 12:29:58.644500] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:53.118 [2024-06-10 12:29:58.644521] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:53.118 [2024-06-10 12:29:58.646867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.118 [2024-06-10 12:29:58.646885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.118 [2024-06-10 12:29:58.646896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.118 [2024-06-10 12:29:58.646903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.118 [2024-06-10 12:29:58.646911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.118 [2024-06-10 12:29:58.646919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.118 [2024-06-10 12:29:58.646926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.118 [2024-06-10 12:29:58.646934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.118 [2024-06-10 12:29:58.646941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:53.118 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:53.118 [2024-06-10 12:29:58.656879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.666920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.667179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.667197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.667206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.667218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.667229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.667236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.667244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.667255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
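[annotation] The reconnect errors above are the expected fallout of the listener hot-remove issued at host/discovery.sh@127. Outside the harness, the same removal can be issued directly with SPDK's rpc.py (script path as defined later in this job; arguments copied verbatim from the traced command; without -s it talks to the target's default RPC socket):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420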
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.119 [2024-06-10 12:29:58.676975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.677412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.677449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.677460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.677479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.677491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.677498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.677506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.677521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.119 [2024-06-10 12:29:58.687026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.687476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.687513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.687525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.687544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.687558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.687566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.687575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.687591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
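[annotation] errno = 111 in the posix_sock_create failures above is ECONNREFUSED: port 4420 just stopped listening, so each reconnect attempt is refused until the discovery poller prunes the stale path. On a Linux box the mapping can be confirmed directly:

grep -w 111 /usr/include/asm-generic/errno.h
# expected match: #define ECONNREFUSED 111 /* Connection refused */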
00:25:53.119 [2024-06-10 12:29:58.697081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.697561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.697598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.697609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.697628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.697640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.697647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.697655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.697669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:53.119 [2024-06-10 12:29:58.707138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.707489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.707502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.707510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.707521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.707531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.707537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.707544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.707555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:53.119 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.119 [2024-06-10 12:29:58.717198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.119 [2024-06-10 12:29:58.717546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.119 [2024-06-10 12:29:58.717558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.119 [2024-06-10 12:29:58.717566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.119 [2024-06-10 12:29:58.717577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.119 [2024-06-10 12:29:58.717587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.119 [2024-06-10 12:29:58.717594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.119 [2024-06-10 12:29:58.717601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.119 [2024-06-10 12:29:58.717611] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.381 [2024-06-10 12:29:58.727251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:53.381 [2024-06-10 12:29:58.727619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.381 [2024-06-10 12:29:58.727630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d050 with addr=10.0.0.2, port=4420
00:25:53.381 [2024-06-10 12:29:58.727637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d050 is same with the state(5) to be set
00:25:53.381 [2024-06-10 12:29:58.727648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d050 (9): Bad file descriptor
00:25:53.381 [2024-06-10 12:29:58.727663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:53.381 [2024-06-10 12:29:58.727669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:53.381 [2024-06-10 12:29:58.727676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:53.381 [2024-06-10 12:29:58.727686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
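[annotation] The notification bookkeeping polled earlier and again just below (host/discovery.sh@74-75) counts target events past a cursor. A sketch reconstructed from the traced values (count 1 advances notify_id to 2, count 2 later advances it to 4); exactly how the real script advances notify_id is an assumption here:

get_notification_count() {
    # @74: count events newer than the current cursor, as in the traced pipeline
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
    # @75: advance the cursor (rule inferred from the traced values, not confirmed)
    notify_id=$((notify_id + notification_count))
}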
00:25:53.381 [2024-06-10 12:29:58.732813] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:53.381 [2024-06-10 12:29:58.732830] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]]
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.381 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]]
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]]
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:53.382 12:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:53.643 12:29:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:54.585 [2024-06-10 12:30:00.071178] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:54.585 [2024-06-10 12:30:00.071201] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:54.585 [2024-06-10 12:30:00.071214] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:54.847 [2024-06-10 12:30:00.202614] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:55.109 [2024-06-10 12:30:00.512328] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:55.109 [2024-06-10 12:30:00.512359] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.109 request:
00:25:55.109 {
00:25:55.109 "name": "nvme",
00:25:55.109 "trtype": "tcp",
00:25:55.109 "traddr": "10.0.0.2",
00:25:55.109 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:55.109 "adrfam": "ipv4",
00:25:55.109 "trsvcid": "8009",
00:25:55.109 "wait_for_attach": true,
00:25:55.109 "method": "bdev_nvme_start_discovery",
00:25:55.109 "req_id": 1
00:25:55.109 }
00:25:55.109 Got JSON-RPC error response
00:25:55.109 response:
00:25:55.109 {
00:25:55.109 "code": -17,
00:25:55.109 "message": "File exists"
00:25:55.109 }
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.109 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.110 request:
00:25:55.110 {
00:25:55.110 "name": "nvme_second",
00:25:55.110 "trtype": "tcp",
00:25:55.110 "traddr": "10.0.0.2",
00:25:55.110 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:55.110 "adrfam": "ipv4",
00:25:55.110 "trsvcid": "8009",
00:25:55.110 "wait_for_attach": true,
00:25:55.110 "method": "bdev_nvme_start_discovery",
00:25:55.110 "req_id": 1
00:25:55.110 }
00:25:55.110 Got JSON-RPC error response
00:25:55.110 response:
00:25:55.110 {
00:25:55.110 "code": -17,
00:25:55.110 "message": "File exists"
00:25:55.110 }
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:55.110 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:55.371 12:30:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:56.315 [2024-06-10 12:30:01.771825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-06-10 12:30:01.771856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1019100 with addr=10.0.0.2, port=8010
00:25:56.315 [2024-06-10 12:30:01.771870] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:56.315 [2024-06-10 12:30:01.771877] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:56.315 [2024-06-10 12:30:01.771885] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:57.257 [2024-06-10 12:30:02.774186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.257 [2024-06-10 12:30:02.774212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1019100 with addr=10.0.0.2, port=8010
00:25:57.257 [2024-06-10 12:30:02.774228] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:57.257 [2024-06-10 12:30:02.774235] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:57.257 [2024-06-10 12:30:02.774242] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:58.198 [2024-06-10 12:30:03.776166] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:58.198 request:
00:25:58.198 {
00:25:58.198 "name": "nvme_second",
00:25:58.198 "trtype": "tcp",
00:25:58.198 "traddr": "10.0.0.2",
00:25:58.198 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:58.198 "adrfam": "ipv4",
00:25:58.198 "trsvcid": "8010",
00:25:58.198 "attach_timeout_ms": 3000,
00:25:58.198 "method": "bdev_nvme_start_discovery",
00:25:58.198 "req_id": 1
00:25:58.198 }
00:25:58.198 Got JSON-RPC error response
00:25:58.198 response:
00:25:58.198 {
00:25:58.198 "code": -110,
00:25:58.198 "message": "Connection timed out"
00:25:58.198 }
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:58.198 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 782706
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:58.459 rmmod nvme_tcp
00:25:58.459 rmmod nvme_fabrics
00:25:58.459 rmmod nvme_keyring
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 782372 ']'
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 782372
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 782372 ']'
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 782372
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 782372
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 782372'
00:25:58.459 killing process with pid 782372
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 782372
00:25:58.459 12:30:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 782372
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:58.723 12:30:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:00.703 12:30:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:00.703
00:26:00.703 real 0m20.951s
00:26:00.703 user 0m23.787s
00:26:00.703 sys 0m7.561s
00:26:00.703 12:30:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable
00:26:00.703 12:30:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:00.703 ************************************
00:26:00.703 END TEST nvmf_host_discovery
00:26:00.703 ************************************
00:26:00.703 12:30:06 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:00.703 12:30:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:26:00.703 12:30:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:00.703 12:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:00.703 ************************************
00:26:00.703 START TEST nvmf_host_multipath_status
00:26:00.703 ************************************
00:26:00.703 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:00.964 * Looking for test storage...
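[annotation] For reference, the duplicate-start and timeout cases exercised in the discovery test above map onto plain RPC invocations. A sketch using rpc.py from this workspace, with arguments copied verbatim from the traced rpc_cmd calls; the expected error codes are the ones shown in the log:

# a second start with an existing discovery name fails with -17 "File exists"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# an unreachable port with a 3000 ms attach timeout fails with -110 "Connection timed out"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000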
00:26:00.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.964 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.965 12:30:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:00.965 12:30:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:09.104 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:09.104 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
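Annotation: the two "Found 0000:31:00.0/.1 (0x8086 - 0x159b)" records above are gather_supported_nvmf_pci_devs matching the E810 ports (Intel vendor ID 0x8086, device ID 0x159b) against the PCI bus. A minimal stand-alone sketch of that matching, assuming the usual sysfs layout; the real nvmf/common.sh walks a prebuilt pci_bus_cache rather than rescanning sysfs:

    # Sketch only -- approximates the e810 discovery traced above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")        # 0x8086 = Intel
        device=$(<"$pci/device")        # 0x159b = E810 port
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null   # net devices bound to this port, e.g. cvl_0_0
        fi
    done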
00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.104 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:09.105 Found net devices under 0000:31:00.0: cvl_0_0 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:09.105 Found net devices under 0000:31:00.1: cvl_0_1 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:09.105 12:30:14 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:09.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:26:09.105 00:26:09.105 --- 10.0.0.2 ping statistics --- 00:26:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.105 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:26:09.105 00:26:09.105 --- 10.0.0.1 ping statistics --- 00:26:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.105 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=789811 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 789811 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 789811 ']' 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:09.105 12:30:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:09.105 [2024-06-10 12:30:14.523781] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:26:09.105 [2024-06-10 12:30:14.523829] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.105 [2024-06-10 12:30:14.596275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:09.105 [2024-06-10 12:30:14.660028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.105 [2024-06-10 12:30:14.660065] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.105 [2024-06-10 12:30:14.660072] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.105 [2024-06-10 12:30:14.660078] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.105 [2024-06-10 12:30:14.660083] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.105 [2024-06-10 12:30:14.660242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.105 [2024-06-10 12:30:14.660257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=789811 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:10.055 [2024-06-10 12:30:15.459361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:10.055 Malloc0 00:26:10.055 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:10.318 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:10.578 12:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.578 [2024-06-10 12:30:16.080814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.578 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:10.838 [2024-06-10 12:30:16.233158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=790171 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 790171 /var/tmp/bdevperf.sock 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 790171 ']' 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:10.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:10.838 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:11.098 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:11.098 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:11.098 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:11.098 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:11.357 Nvme0n1 00:26:11.357 12:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:11.927 Nvme0n1 00:26:11.927 12:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:11.927 12:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:13.839 12:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:13.839 12:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:14.099 12:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:14.099 12:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.482 12:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.482 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.483 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.483 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.483 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.744 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.744 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.744 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.744 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.004 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.005 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.265 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.265 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:16.265 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.526 12:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.526 12:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:17.467 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:17.467 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.467 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.467 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.728 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.728 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.728 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.728 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.989 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.249 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.513 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.513 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:18.513 12:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:18.774 12:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:18.774 12:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.159 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.420 12:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.709 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.709 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.709 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.709 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.969 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.969 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:20.969 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.969 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:21.228 12:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:22.166 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:22.166 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.166 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.166 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.425 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.425 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:22.425 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.425 12:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.425 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.425 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.425 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.425 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.685 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.685 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.685 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.685 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
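Annotation: every port_status check in this run reduces to one bdev_nvme_get_io_paths RPC against the bdevperf socket plus a jq select on the listener port, as traced at multipath_status.sh@64. A minimal reconstruction of that helper from the traced commands (an approximation, not the verbatim script source):

    # Reconstructed from the traced commands above, not quoted from the script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # port_status <trsvcid> <attribute> <expected>: succeeds when the io_path
    # for that listener port reports the expected value for current/connected/accessible.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true      # e.g. 4420 is the active path here
    port_status 4421 accessible false  # e.g. 4421 was just made inaccessible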
00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.945 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.205 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.205 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:23.205 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:23.465 12:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:23.465 12:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.849 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
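Annotation: each scenario in this test is the same round trip: two nvmf_subsystem_listener_set_ana_state RPCs against the target, a one-second sleep so the initiator can process the ANA change notification, then six port_status assertions via check_status. A sketch of the set_ANA_state step, reconstructed from the traced commands (approximate, not the verbatim script):

    # Reconstructed from the traced commands; approximates multipath_status.sh@59-60.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # set_ANA_state <state-for-4420> <state-for-4421>
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state inaccessible inaccessible  # the step verified just below
    sleep 1                                  # let the host absorb the ANA change
    # expected: both paths stay connected but neither is current nor accessible,
    # i.e. "check_status false false true true false false"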
00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.110 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.369 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.369 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:25.369 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.369 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.629 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.629 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:25.629 12:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:25.629 12:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:25.888 12:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:26.827 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:26.827 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:26.827 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.827 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.087 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.349 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.349 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.349 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.349 12:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.610 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.871 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.871 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:28.132 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:28.132 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:28.132 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:28.392 12:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:29.332 12:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:29.332 12:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.332 12:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.332 12:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.594 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.594 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:29.594 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.594 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.855 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.115 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.115 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.115 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.116 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.377 12:30:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:30.377 12:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:30.637 12:30:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.637 12:30:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.023 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.285 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.285 12:30:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.285 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.285 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.546 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.546 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:32.547 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.547 12:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.547 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.547 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.547 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.547 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.808 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.808 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:32.808 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:33.068 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:33.068 12:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.453 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.454 12:30:39 
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:34.454 12:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:34.714 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:34.714 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:34.714 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:34.714 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:34.974 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:35.234 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:35.234 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:26:35.234 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
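Note: check_status bundles six port_status assertions; the @68-@73 line numbers in the trace fix their order. A sketch inferred from the trace:

    # Sketch inferred from the @68-@73 trace; not the verbatim script.
    # Expected values, in trace order: current, connected, accessible, each for 4420 then 4421.
    function check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

So check_status false true true true true true at @125 above asserts that after the non_optimized/optimized flip only the optimized 4421 path carries I/O, while both paths stay connected and accessible.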
00:26:35.494 12:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:35.494 12:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:26:36.469 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:26:36.469 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:36.469 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:36.469 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:36.729 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:36.729 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:36.729 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:36.729 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:36.989 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:37.248 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:37.248 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:37.248 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.248 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:37.509 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
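Note: the jq selectors imply the shape of the bdev_nvme_get_io_paths reply. A hypothetical, trimmed reply consistent with those filters -- field values illustrative, not captured from this run:

    $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
    {
      "poll_groups": [
        {
          "io_paths": [
            { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true },
            { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
          ]
        }
      ]
    }

With a reply like this, jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' prints false, the exact string the @64 [[ ... ]] comparison below checks after the non_optimized/inaccessible flip.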
00:26:37.509 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:37.509 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.509 12:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 790171
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 790171 ']'
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 790171
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 790171
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 790171'
killing process with pid 790171
12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 790171
00:26:37.509 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 790171
00:26:37.773 Connection closed with partial response:
00:26:37.773
00:26:37.773
00:26:37.773 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 790171
00:26:37.773 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:37.773 [2024-06-10 12:30:16.268960] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:26:37.773 [2024-06-10 12:30:16.269005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790171 ]
00:26:37.773 EAL: No free 2048 kB hugepages reported on node 1
00:26:37.773 [2024-06-10 12:30:16.316337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:37.773 [2024-06-10 12:30:16.368230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:26:37.773 Running I/O for 90 seconds...
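Note: the @949-@973 trace above spells out killprocess() from common/autotest_common.sh nearly line by line. A sketch consistent with that trace; the sudo special case is not exercised in this run, so its handling below is an assumption:

    # Sketch inferred from the @949-@973 trace; not the verbatim script.
    function killprocess() {
        [[ -n $1 ]] || return 1                            # @949: a pid argument is required
        kill -0 "$1"                                       # @953: fails if the process is already gone
        if [[ $(uname) == Linux ]]; then                   # @954
            process_name=$(ps --no-headers -o comm= "$1") # @955: resolves to "reactor_2" here
        fi
        if [[ $process_name != sudo ]]; then               # @959: assumed special-casing of sudo wrappers
            echo "killing process with pid $1"             # @967
            kill "$1"                                      # @968
        fi
        wait "$1"                                          # @973: reap it and surface its exit code
    }

The "Connection closed with partial response:" output appears to be the killed bdevperf process, pid 790171, shutting down mid-I/O. Everything from "Running I/O for 90 seconds..." down to the Latency summary is the dump of try.txt, the log bdevperf wrote during the run.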
00:26:37.773 [2024-06-10 12:30:28.850161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.773 [2024-06-10 12:30:28.850199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:37.773 [2024-06-10 12:30:28.850993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.773 [2024-06-10 12:30:28.850998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:37.774 [2024-06-10 12:30:28.851165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45272 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851692] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:37.774 [2024-06-10 12:30:28.851858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.774 [2024-06-10 12:30:28.851863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.851882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.851900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.851919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.851940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.851958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.851977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.851990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.851995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.852046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.852066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.852087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.775 [2024-06-10 12:30:28.852106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:37.775 [2024-06-10 12:30:28.852911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.775 [2024-06-10 12:30:28.852916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.852932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:37.776 [2024-06-10 12:30:28.852937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.852952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.852958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.852975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.852979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.852995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:28.853954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:28.853960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
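Note: the block above is try.txt recording, for each I/O that failed during the 12:30:28 ANA flip, the submitted command (nvme_qpair.c:243) and its completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) (nvme_qpair.c:474); the batch that follows repeats this for the 12:30:40 flip. A hypothetical one-liner, not part of the test, that condenses such a dump to a per-status tally:

    # Count dumped completions per ANA status string in the captured bdevperf log.
    grep -o 'ASYMMETRIC ACCESS [A-Z]*' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c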
00:26:37.776 [2024-06-10 12:30:40.990112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.776 [2024-06-10 12:30:40.990287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:37.776 [2024-06-10 12:30:40.990731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.777 [2024-06-10 12:30:40.990741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.777 [2024-06-10 12:30:40.990758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.990774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.990789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.990804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.990820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.990835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.777 [2024-06-10 12:30:40.990850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.777 [2024-06-10 12:30:40.990986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.990997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.777 [2024-06-10 12:30:40.991004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:37.777 [2024-06-10 12:30:40.991015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.777 [2024-06-10 12:30:40.991020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:37.777 Received shutdown signal, test time was about 25.623594 seconds 00:26:37.777 00:26:37.777 Latency(us) 00:26:37.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.777 Job: Nvme0n1 (Core 
Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:37.777 Verification LBA range: start 0x0 length 0x4000 00:26:37.777 Nvme0n1 : 25.62 10936.63 42.72 0.00 0.00 11685.64 145.07 3019898.88 00:26:37.777 =================================================================================================================== 00:26:37.777 Total : 10936.63 42.72 0.00 0.00 11685.64 145.07 3019898.88 00:26:37.777 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.038 rmmod nvme_tcp 00:26:38.038 rmmod nvme_fabrics 00:26:38.038 rmmod nvme_keyring 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 789811 ']' 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 789811 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 789811 ']' 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 789811 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 789811 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 789811' 00:26:38.038 killing process with pid 789811 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 789811 00:26:38.038 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 789811 00:26:38.298 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.298 12:30:43 
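The teardown just traced reduces to a short sequence. A minimal sketch follows, assuming the same workspace layout and the nvmfpid (789811) this run recorded at startup; the retry cadence is paraphrased rather than copied from the harness:

    # Sketch of the nvmftestfini teardown traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules; retried because references can linger.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # Kill the target only if the pid still names an SPDK reactor, never sudo.
    pid=789811
    if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        kill "$pid" && wait "$pid"   # wait works because the harness spawned it
    fi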
00:26:38.298 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:38.298 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:38.298 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:38.298 12:30:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:38.298 [xtrace condensed: nvmf/common.sh@628 and common/autotest_common.sh@22 run _remove_spdk_ns with tracing diverted to fd 14]
00:26:40.213 12:30:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:40.213
00:26:40.213 real	0m39.475s
00:26:40.213 user	1m39.384s
00:26:40.213 sys	0m11.288s
00:26:40.213 ************************************
00:26:40.213 END TEST nvmf_host_multipath_status
00:26:40.213 ************************************
00:26:40.213 12:30:45 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:40.213 12:30:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:26:40.213 ************************************
00:26:40.213 START TEST nvmf_discovery_remove_ifc
00:26:40.213 ************************************
00:26:40.213 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:40.475 * Looking for test storage...
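run_test is the harness helper that wraps each child script in the START/END banners above and propagates its exit status. A simplified sketch of the pattern (the real helper in spdk/test/common/autotest_common.sh also records per-test timing; error handling under set -e is elided here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"             # run the test script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }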
00:26:40.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:40.475 [nvmf/common.sh@9-@16 defaults condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME]
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:40.475 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:40.476 [PATH exports condensed: paths/export.sh@2-@6 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system PATH, then export and echo the result]
00:26:40.476 [build_nvmf_app_args condensed: nvmf/common.sh@47-@51 export NVMF_APP_SHM_ID and append "-i $NVMF_APP_SHM_ID -e 0xFFFF" to NVMF_APP; have_pci_nics=0]
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
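One detail worth pulling out of the common.sh lines above: the host identity is generated fresh per run with nvme-cli, and the NQN's UUID is reused as the host ID. Roughly, under the same conventions (a sketch, not the script verbatim):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Later connects can then present a stable identity, e.g.:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"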
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:40.476 12:30:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:40.476 [xtrace condensed: nvmf/common.sh@628 and common/autotest_common.sh@22 run _remove_spdk_ns with tracing diverted]
00:26:48.623 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:48.623 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:48.623 [array setup condensed: nvmf/common.sh@289-@298 declare pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx; @301-@318 fill them with the supported device IDs, e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013]
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:48.624 Found 0000:31:00.0 (0x8086 - 0x159b)
00:26:48.624 Found 0000:31:00.1 (0x8086 - 0x159b)
00:26:48.624 [per-port checks condensed: nvmf/common.sh@340-@352 echo each find and verify the bound driver (ice) and device ID for both E810 ports; no RDMA-only filtering applies since the transport is tcp]
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
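gather_supported_nvmf_pci_devs buckets PCI device IDs into the e810/x722/mlx arrays and, with SPDK_TEST_NVMF_NICS=e810 in this job's config, keeps only the E810 ports. A minimal stand-alone sketch of the same bucketing, using lspci rather than the pci_bus_cache map the real common.sh builds (so an approximation, with the IDs taken from the trace):

    declare -a e810=() x722=() mlx=()
    while read -r addr class vendor device _; do
        case "${vendor//\"/}:${device//\"/}" in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810
            8086:37d2)           x722+=("$addr") ;;   # Intel X722
            15b3:*)              mlx+=("$addr")  ;;   # Mellanox (loosely matched here)
        esac
    done < <(lspci -Dnmm)
    pci_devs=("${e810[@]}")     # SPDK_TEST_NVMF_NICS=e810 keeps only E810 ports
    printf 'Found %s\n' "${pci_devs[@]}"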
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:48.624 [net-device scan condensed: nvmf/common.sh@388-@401 run once per port; each link is up and exposes exactly one net device, which is appended to net_devs]
00:26:48.624 Found net devices under 0000:31:00.0: cvl_0_0
00:26:48.624 Found net devices under 0000:31:00.1: cvl_0_1
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:48.624 12:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:48.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:48.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms
00:26:48.624
00:26:48.624 --- 10.0.0.2 ping statistics ---
00:26:48.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:48.624 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms
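The wiring above is the crux of the interface-removal test: the target port is moved into its own network namespace, so tearing its address down later severs only the NVMe/TCP path. The same setup, stripped of tracing (interface names are this rig's; substitute your own NIC pair):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1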
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:48.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:48.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms
00:26:48.624
00:26:48.624 --- 10.0.0.1 ping statistics ---
00:26:48.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:48.624 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:26:48.624 [transport options condensed: nvmf/common.sh@450-@468 select NVMF_TRANSPORT_OPTS='-t tcp -o' for a tcp, non-iso run]
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=800393
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 800393
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 800393 ']'
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100
00:26:48.624 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:48.624 [2024-06-10 12:30:54.192461] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:26:48.624 [2024-06-10 12:30:54.192524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.886 [2024-06-10 12:30:54.287252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.886 [2024-06-10 12:30:54.379278] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.886 [2024-06-10 12:30:54.379335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.886 [2024-06-10 12:30:54.379343] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.886 [2024-06-10 12:30:54.379350] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.886 [2024-06-10 12:30:54.379356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.886 [2024-06-10 12:30:54.379388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.462 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:49.462 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:49.462 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.462 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:49.462 12:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.462 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.462 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:49.462 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.462 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.462 [2024-06-10 12:30:55.030639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.462 [2024-06-10 12:30:55.038841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:49.462 null0 00:26:49.723 [2024-06-10 12:30:55.070832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=800440 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 800440 /tmp/host.sock 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 800440 ']' 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:49.723 
12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:49.723 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:49.723 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.723 [2024-06-10 12:30:55.146822] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:26:49.723 [2024-06-10 12:30:55.146891] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800440 ] 00:26:49.723 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.724 [2024-06-10 12:30:55.210385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.724 [2024-06-10 12:30:55.276860] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.664 12:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.606 [2024-06-10 12:30:56.999207] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:51.606 [2024-06-10 12:30:56.999229] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:51.606 [2024-06-10 12:30:56.999243] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.606 [2024-06-10 12:30:57.126665] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:51.606 [2024-06-10 12:30:57.190116] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:26:51.606 [2024-06-10 12:30:57.190163] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:26:51.606 [2024-06-10 12:30:57.190186] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:26:51.606 [2024-06-10 12:30:57.190209] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:51.606 [2024-06-10 12:30:57.190229] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:51.606 [2024-06-10 12:30:57.198316] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf0c850 was disconnected and freed. delete nvme_qpair.
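wait_for_bdev, traced above, polls get_bdev_list (an RPC listing piped through jq, sort and xargs) once per second until the flattened list equals the expected value. A condensed sketch of that pattern against the same host RPC socket (internals simplified):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1          # retried once per second, as in the trace
        done
    }

    wait_for_bdev nvme0n1    # blocks until discovery has attached the namespace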
00:26:51.606 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:51.866 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:51.866 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:26:51.867 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:26:51.867 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:26:51.867 12:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:26:51.867 [polling condensed: get_bdev_list still reports nvme0n1, so the @33-@34 loop sleeps 1 s and retries at 12:30:57 and 12:30:58]
00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 --
# sort 00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.187 12:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.127 12:31:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:56.067 12:31:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.488 [2024-06-10 12:31:02.630541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:57.488 [2024-06-10 12:31:02.630588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.488 [2024-06-10 12:31:02.630600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.488 [2024-06-10 12:31:02.630611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.488 [2024-06-10 12:31:02.630618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.488 [2024-06-10 12:31:02.630631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.488 [2024-06-10 12:31:02.630638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.488 [2024-06-10 12:31:02.630646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.488 [2024-06-10 12:31:02.630653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.488 [2024-06-10 12:31:02.630661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.488 [2024-06-10 12:31:02.630668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.488 [2024-06-10 12:31:02.630676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3bd0 is same with the state(5) to be set 00:26:57.488 [2024-06-10 12:31:02.640560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed3bd0 (9): Bad file descriptor 00:26:57.488 [2024-06-10 12:31:02.650603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.488 12:31:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.427 [2024-06-10 12:31:03.707210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:58.427 [2024-06-10 12:31:03.707248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed3bd0 with addr=10.0.0.2, port=4420 00:26:58.427 [2024-06-10 12:31:03.707259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3bd0 is same with the state(5) to be set 00:26:58.427 [2024-06-10 12:31:03.707282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed3bd0 (9): Bad file descriptor 00:26:58.427 [2024-06-10 12:31:03.707615] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
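Every error in this burst follows from the address removal at discovery_remove_ifc.sh@75-@76 above: reads time out (errno 110), the admin queue's outstanding requests are aborted, and reconnect attempts fail while the link stays down. The fault injection, and the healing the test performs later at @82-@83, reduce to:

    # Induce the path loss under test ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... host logs errno 110, ABORTED - SQ DELETION, reconnect failures ...
    # then heal it again so discovery can re-attach:
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up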
00:26:58.427 [2024-06-10 12:31:03.707633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:58.427 [2024-06-10 12:31:03.707640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:58.427 [2024-06-10 12:31:03.707649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:58.427 [2024-06-10 12:31:03.707664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.427 [2024-06-10 12:31:03.707672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:58.427 12:31:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.427 12:31:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.427 12:31:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.367 [2024-06-10 12:31:04.710050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.367 [2024-06-10 12:31:04.710082] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:59.367 [2024-06-10 12:31:04.710108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.367 [2024-06-10 12:31:04.710118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.367 [2024-06-10 12:31:04.710127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.367 [2024-06-10 12:31:04.710134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.367 [2024-06-10 12:31:04.710142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.367 [2024-06-10 12:31:04.710149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.367 [2024-06-10 12:31:04.710157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.367 [2024-06-10 12:31:04.710163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.367 [2024-06-10 12:31:04.710171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.367 [2024-06-10 12:31:04.710178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.367 [2024-06-10 12:31:04.710185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
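How long the host keeps retrying before declaring the controller lost is governed by the reconnect parameters passed to bdev_nvme_start_discovery earlier in the run. For reference, the same attach issued by hand against the host app's RPC socket (flags exactly as traced; the per-flag notes are interpretive):

    #   --ctrlr-loss-timeout-sec 2    delete a lost controller after 2 s
    #   --reconnect-delay-sec 1       retry the connection every 1 s
    #   --fast-io-fail-timeout-sec 1  fail queued I/O quickly while a path is down
    #   --wait-for-attach             block until the bdev is attached
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach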
00:26:59.367 [2024-06-10 12:31:04.710539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed3060 (9): Bad file descriptor 00:26:59.367 [2024-06-10 12:31:04.711550] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:59.367 [2024-06-10 12:31:04.711560] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:59.367 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:59.368 12:31:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.748 12:31:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.748 12:31:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:00.748 12:31:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.316 [2024-06-10 12:31:06.764357] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.316 [2024-06-10 12:31:06.764378] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.316 [2024-06-10 12:31:06.764393] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.316 [2024-06-10 12:31:06.852683] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:01.316 [2024-06-10 12:31:06.912483] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:01.316 [2024-06-10 12:31:06.912521] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:01.316 [2024-06-10 12:31:06.912542] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:01.316 [2024-06-10 12:31:06.912557] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:01.316 [2024-06-10 12:31:06.912565] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.577 [2024-06-10 12:31:06.922059] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xee08f0 was disconnected and freed. delete nvme_qpair. 
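The sequence traced above (drop the target-side address, watch the bdev disappear, restore the address, watch discovery re-create it) reduces to roughly the following, again reconstructed from the traced commands; the namespace and interface names are the ones this rig assigns to the e810 ports:

    # Restore the target-side address inside the SPDK network namespace,
    # bring the link back up, then wait for the discovery service to
    # re-attach the subsystem as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # polls get_bdev_list until nvme1n1 shows up again
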
00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 800440 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 800440 ']' 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 800440 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 800440 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 800440' 00:27:01.577 killing process with pid 800440 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 800440 00:27:01.577 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 800440 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.837 rmmod nvme_tcp 00:27:01.837 rmmod nvme_fabrics 00:27:01.837 rmmod nvme_keyring 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:01.837 
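killprocess, whose branches are traced above for pid 800440, is the common teardown helper. A rough sketch of what those branches do (simplified from common/autotest_common.sh; the sudo-owned-process branch and non-Linux paths are elided here):

    # killprocess: terminate a test daemon by pid and reap it.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1            # '[' -z ... ']' check
        kill -0 "$pid" || return 0             # already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"   # reactor_* gets a plain kill
        kill "$pid"
        wait "$pid" || true                    # reap so ports/sockets free up
    }
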
12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 800393 ']' 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 800393 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 800393 ']' 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 800393 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 800393 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 800393' 00:27:01.837 killing process with pid 800393 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 800393 00:27:01.837 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 800393 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.098 12:31:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.011 12:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.011 00:27:04.011 real 0m23.781s 00:27:04.011 user 0m27.161s 00:27:04.011 sys 0m7.396s 00:27:04.011 12:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:04.011 12:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.011 ************************************ 00:27:04.011 END TEST nvmf_discovery_remove_ifc 00:27:04.011 ************************************ 00:27:04.272 12:31:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:04.272 12:31:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:04.272 12:31:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:04.272 12:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.272 ************************************ 00:27:04.272 START TEST nvmf_identify_kernel_target 00:27:04.272 ************************************ 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:04.272 * Looking for test storage... 00:27:04.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.272 12:31:09 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.272 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.272 12:31:09 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.273 12:31:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:12.415 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:12.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:12.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.416 12:31:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:12.416 Found net devices under 0000:31:00.0: cvl_0_0 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:12.416 Found net devices under 0000:31:00.1: cvl_0_1 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.416 12:31:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:27:12.680 00:27:12.680 --- 10.0.0.2 ping statistics --- 00:27:12.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.680 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:27:12.680 00:27:12.680 --- 10.0.0.1 ping statistics --- 00:27:12.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.680 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.680 12:31:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.680 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:12.681 12:31:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:16.890 Waiting for block devices as requested 00:27:16.890 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:16.890 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:17.150 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:17.150 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:17.150 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:17.411 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:17.411 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:17.411 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:17.411 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:17.671 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:17.671 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:17.671 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:17.671 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:17.671 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:17.671 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:17.671 12:31:23 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:17.671 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:17.672 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:17.672 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:17.672 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:17.672 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:17.934 No valid GPT data, bailing 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:27:17.934 00:27:17.934 Discovery Log Number of Records 2, Generation counter 2 00:27:17.934 =====Discovery Log Entry 0====== 00:27:17.934 trtype: tcp 00:27:17.934 adrfam: ipv4 00:27:17.934 subtype: current discovery subsystem 00:27:17.934 treq: not specified, sq flow control disable supported 00:27:17.934 portid: 1 00:27:17.934 trsvcid: 4420 00:27:17.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:17.934 traddr: 10.0.0.1 00:27:17.934 eflags: none 00:27:17.934 sectype: none 00:27:17.934 =====Discovery Log Entry 1====== 
00:27:17.934 trtype: tcp 00:27:17.934 adrfam: ipv4 00:27:17.934 subtype: nvme subsystem 00:27:17.934 treq: not specified, sq flow control disable supported 00:27:17.934 portid: 1 00:27:17.934 trsvcid: 4420 00:27:17.934 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:17.934 traddr: 10.0.0.1 00:27:17.934 eflags: none 00:27:17.934 sectype: none 00:27:17.934 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:17.934 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:17.934 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.934 ===================================================== 00:27:17.934 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:17.934 ===================================================== 00:27:17.934 Controller Capabilities/Features 00:27:17.934 ================================ 00:27:17.934 Vendor ID: 0000 00:27:17.934 Subsystem Vendor ID: 0000 00:27:17.934 Serial Number: c726624b9befd48fbbc3 00:27:17.934 Model Number: Linux 00:27:17.934 Firmware Version: 6.7.0-68 00:27:17.934 Recommended Arb Burst: 0 00:27:17.934 IEEE OUI Identifier: 00 00 00 00:27:17.934 Multi-path I/O 00:27:17.934 May have multiple subsystem ports: No 00:27:17.934 May have multiple controllers: No 00:27:17.934 Associated with SR-IOV VF: No 00:27:17.934 Max Data Transfer Size: Unlimited 00:27:17.934 Max Number of Namespaces: 0 00:27:17.934 Max Number of I/O Queues: 1024 00:27:17.934 NVMe Specification Version (VS): 1.3 00:27:17.934 NVMe Specification Version (Identify): 1.3 00:27:17.934 Maximum Queue Entries: 1024 00:27:17.934 Contiguous Queues Required: No 00:27:17.934 Arbitration Mechanisms Supported 00:27:17.934 Weighted Round Robin: Not Supported 00:27:17.934 Vendor Specific: Not Supported 00:27:17.934 Reset Timeout: 7500 ms 00:27:17.934 Doorbell Stride: 4 bytes 00:27:17.934 NVM Subsystem Reset: Not Supported 00:27:17.934 Command Sets Supported 00:27:17.934 NVM Command Set: Supported 00:27:17.934 Boot Partition: Not Supported 00:27:17.934 Memory Page Size Minimum: 4096 bytes 00:27:17.934 Memory Page Size Maximum: 4096 bytes 00:27:17.935 Persistent Memory Region: Not Supported 00:27:17.935 Optional Asynchronous Events Supported 00:27:17.935 Namespace Attribute Notices: Not Supported 00:27:17.935 Firmware Activation Notices: Not Supported 00:27:17.935 ANA Change Notices: Not Supported 00:27:17.935 PLE Aggregate Log Change Notices: Not Supported 00:27:17.935 LBA Status Info Alert Notices: Not Supported 00:27:17.935 EGE Aggregate Log Change Notices: Not Supported 00:27:17.935 Normal NVM Subsystem Shutdown event: Not Supported 00:27:17.935 Zone Descriptor Change Notices: Not Supported 00:27:17.935 Discovery Log Change Notices: Supported 00:27:17.935 Controller Attributes 00:27:17.935 128-bit Host Identifier: Not Supported 00:27:17.935 Non-Operational Permissive Mode: Not Supported 00:27:17.935 NVM Sets: Not Supported 00:27:17.935 Read Recovery Levels: Not Supported 00:27:17.935 Endurance Groups: Not Supported 00:27:17.935 Predictable Latency Mode: Not Supported 00:27:17.935 Traffic Based Keep ALive: Not Supported 00:27:17.935 Namespace Granularity: Not Supported 00:27:17.935 SQ Associations: Not Supported 00:27:17.935 UUID List: Not Supported 00:27:17.935 Multi-Domain Subsystem: Not Supported 00:27:17.935 Fixed Capacity Management: Not Supported 00:27:17.935 Variable Capacity Management: Not 
Supported 00:27:17.935 Delete Endurance Group: Not Supported 00:27:17.935 Delete NVM Set: Not Supported 00:27:17.935 Extended LBA Formats Supported: Not Supported 00:27:17.935 Flexible Data Placement Supported: Not Supported 00:27:17.935 00:27:17.935 Controller Memory Buffer Support 00:27:17.935 ================================ 00:27:17.935 Supported: No 00:27:17.935 00:27:17.935 Persistent Memory Region Support 00:27:17.935 ================================ 00:27:17.935 Supported: No 00:27:17.935 00:27:17.935 Admin Command Set Attributes 00:27:17.935 ============================ 00:27:17.935 Security Send/Receive: Not Supported 00:27:17.935 Format NVM: Not Supported 00:27:17.935 Firmware Activate/Download: Not Supported 00:27:17.935 Namespace Management: Not Supported 00:27:17.935 Device Self-Test: Not Supported 00:27:17.935 Directives: Not Supported 00:27:17.935 NVMe-MI: Not Supported 00:27:17.935 Virtualization Management: Not Supported 00:27:17.935 Doorbell Buffer Config: Not Supported 00:27:17.935 Get LBA Status Capability: Not Supported 00:27:17.935 Command & Feature Lockdown Capability: Not Supported 00:27:17.935 Abort Command Limit: 1 00:27:17.935 Async Event Request Limit: 1 00:27:17.935 Number of Firmware Slots: N/A 00:27:17.935 Firmware Slot 1 Read-Only: N/A 00:27:17.935 Firmware Activation Without Reset: N/A 00:27:17.935 Multiple Update Detection Support: N/A 00:27:17.935 Firmware Update Granularity: No Information Provided 00:27:17.935 Per-Namespace SMART Log: No 00:27:17.935 Asymmetric Namespace Access Log Page: Not Supported 00:27:17.935 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:17.935 Command Effects Log Page: Not Supported 00:27:17.935 Get Log Page Extended Data: Supported 00:27:17.935 Telemetry Log Pages: Not Supported 00:27:17.935 Persistent Event Log Pages: Not Supported 00:27:17.935 Supported Log Pages Log Page: May Support 00:27:17.935 Commands Supported & Effects Log Page: Not Supported 00:27:17.935 Feature Identifiers & Effects Log Page:May Support 00:27:17.935 NVMe-MI Commands & Effects Log Page: May Support 00:27:17.935 Data Area 4 for Telemetry Log: Not Supported 00:27:17.935 Error Log Page Entries Supported: 1 00:27:17.935 Keep Alive: Not Supported 00:27:17.935 00:27:17.935 NVM Command Set Attributes 00:27:17.935 ========================== 00:27:17.935 Submission Queue Entry Size 00:27:17.935 Max: 1 00:27:17.935 Min: 1 00:27:17.935 Completion Queue Entry Size 00:27:17.935 Max: 1 00:27:17.935 Min: 1 00:27:17.935 Number of Namespaces: 0 00:27:17.935 Compare Command: Not Supported 00:27:17.935 Write Uncorrectable Command: Not Supported 00:27:17.935 Dataset Management Command: Not Supported 00:27:17.935 Write Zeroes Command: Not Supported 00:27:17.935 Set Features Save Field: Not Supported 00:27:17.935 Reservations: Not Supported 00:27:17.935 Timestamp: Not Supported 00:27:17.935 Copy: Not Supported 00:27:17.935 Volatile Write Cache: Not Present 00:27:17.935 Atomic Write Unit (Normal): 1 00:27:17.935 Atomic Write Unit (PFail): 1 00:27:17.935 Atomic Compare & Write Unit: 1 00:27:17.935 Fused Compare & Write: Not Supported 00:27:17.935 Scatter-Gather List 00:27:17.935 SGL Command Set: Supported 00:27:17.935 SGL Keyed: Not Supported 00:27:17.935 SGL Bit Bucket Descriptor: Not Supported 00:27:17.935 SGL Metadata Pointer: Not Supported 00:27:17.935 Oversized SGL: Not Supported 00:27:17.935 SGL Metadata Address: Not Supported 00:27:17.935 SGL Offset: Supported 00:27:17.935 Transport SGL Data Block: Not Supported 00:27:17.935 Replay Protected Memory Block: 
Not Supported 00:27:17.935 00:27:17.935 Firmware Slot Information 00:27:17.935 ========================= 00:27:17.935 Active slot: 0 00:27:17.935 00:27:17.935 00:27:17.935 Error Log 00:27:17.935 ========= 00:27:17.935 00:27:17.935 Active Namespaces 00:27:17.935 ================= 00:27:17.935 Discovery Log Page 00:27:17.935 ================== 00:27:17.935 Generation Counter: 2 00:27:17.935 Number of Records: 2 00:27:17.935 Record Format: 0 00:27:17.935 00:27:17.935 Discovery Log Entry 0 00:27:17.935 ---------------------- 00:27:17.935 Transport Type: 3 (TCP) 00:27:17.935 Address Family: 1 (IPv4) 00:27:17.935 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:17.935 Entry Flags: 00:27:17.935 Duplicate Returned Information: 0 00:27:17.935 Explicit Persistent Connection Support for Discovery: 0 00:27:17.935 Transport Requirements: 00:27:17.935 Secure Channel: Not Specified 00:27:17.935 Port ID: 1 (0x0001) 00:27:17.935 Controller ID: 65535 (0xffff) 00:27:17.935 Admin Max SQ Size: 32 00:27:17.935 Transport Service Identifier: 4420 00:27:17.935 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:17.935 Transport Address: 10.0.0.1 00:27:17.935 Discovery Log Entry 1 00:27:17.935 ---------------------- 00:27:17.935 Transport Type: 3 (TCP) 00:27:17.935 Address Family: 1 (IPv4) 00:27:17.935 Subsystem Type: 2 (NVM Subsystem) 00:27:17.935 Entry Flags: 00:27:17.935 Duplicate Returned Information: 0 00:27:17.935 Explicit Persistent Connection Support for Discovery: 0 00:27:17.935 Transport Requirements: 00:27:17.935 Secure Channel: Not Specified 00:27:17.935 Port ID: 1 (0x0001) 00:27:17.935 Controller ID: 65535 (0xffff) 00:27:17.935 Admin Max SQ Size: 32 00:27:17.935 Transport Service Identifier: 4420 00:27:17.935 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:17.935 Transport Address: 10.0.0.1 00:27:17.935 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.935 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.935 get_feature(0x01) failed 00:27:17.935 get_feature(0x02) failed 00:27:17.935 get_feature(0x04) failed 00:27:17.935 ===================================================== 00:27:17.935 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.935 ===================================================== 00:27:17.935 Controller Capabilities/Features 00:27:17.935 ================================ 00:27:17.935 Vendor ID: 0000 00:27:17.935 Subsystem Vendor ID: 0000 00:27:17.935 Serial Number: 94e5e7ab81eb20eae56d 00:27:17.935 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:17.935 Firmware Version: 6.7.0-68 00:27:17.935 Recommended Arb Burst: 6 00:27:17.935 IEEE OUI Identifier: 00 00 00 00:27:17.935 Multi-path I/O 00:27:17.935 May have multiple subsystem ports: Yes 00:27:17.935 May have multiple controllers: Yes 00:27:17.935 Associated with SR-IOV VF: No 00:27:17.935 Max Data Transfer Size: Unlimited 00:27:17.935 Max Number of Namespaces: 1024 00:27:17.935 Max Number of I/O Queues: 128 00:27:17.935 NVMe Specification Version (VS): 1.3 00:27:17.935 NVMe Specification Version (Identify): 1.3 00:27:17.935 Maximum Queue Entries: 1024 00:27:17.935 Contiguous Queues Required: No 00:27:17.935 Arbitration Mechanisms Supported 00:27:17.935 Weighted Round Robin: Not Supported 00:27:17.935 Vendor Specific: Not Supported 
00:27:17.935 Reset Timeout: 7500 ms 00:27:17.935 Doorbell Stride: 4 bytes 00:27:17.935 NVM Subsystem Reset: Not Supported 00:27:17.935 Command Sets Supported 00:27:17.935 NVM Command Set: Supported 00:27:17.935 Boot Partition: Not Supported 00:27:17.935 Memory Page Size Minimum: 4096 bytes 00:27:17.936 Memory Page Size Maximum: 4096 bytes 00:27:17.936 Persistent Memory Region: Not Supported 00:27:17.936 Optional Asynchronous Events Supported 00:27:17.936 Namespace Attribute Notices: Supported 00:27:17.936 Firmware Activation Notices: Not Supported 00:27:17.936 ANA Change Notices: Supported 00:27:17.936 PLE Aggregate Log Change Notices: Not Supported 00:27:17.936 LBA Status Info Alert Notices: Not Supported 00:27:17.936 EGE Aggregate Log Change Notices: Not Supported 00:27:17.936 Normal NVM Subsystem Shutdown event: Not Supported 00:27:17.936 Zone Descriptor Change Notices: Not Supported 00:27:17.936 Discovery Log Change Notices: Not Supported 00:27:17.936 Controller Attributes 00:27:17.936 128-bit Host Identifier: Supported 00:27:17.936 Non-Operational Permissive Mode: Not Supported 00:27:17.936 NVM Sets: Not Supported 00:27:17.936 Read Recovery Levels: Not Supported 00:27:17.936 Endurance Groups: Not Supported 00:27:17.936 Predictable Latency Mode: Not Supported 00:27:17.936 Traffic Based Keep ALive: Supported 00:27:17.936 Namespace Granularity: Not Supported 00:27:17.936 SQ Associations: Not Supported 00:27:17.936 UUID List: Not Supported 00:27:17.936 Multi-Domain Subsystem: Not Supported 00:27:17.936 Fixed Capacity Management: Not Supported 00:27:17.936 Variable Capacity Management: Not Supported 00:27:17.936 Delete Endurance Group: Not Supported 00:27:17.936 Delete NVM Set: Not Supported 00:27:17.936 Extended LBA Formats Supported: Not Supported 00:27:17.936 Flexible Data Placement Supported: Not Supported 00:27:17.936 00:27:17.936 Controller Memory Buffer Support 00:27:17.936 ================================ 00:27:17.936 Supported: No 00:27:17.936 00:27:17.936 Persistent Memory Region Support 00:27:17.936 ================================ 00:27:17.936 Supported: No 00:27:17.936 00:27:17.936 Admin Command Set Attributes 00:27:17.936 ============================ 00:27:17.936 Security Send/Receive: Not Supported 00:27:17.936 Format NVM: Not Supported 00:27:17.936 Firmware Activate/Download: Not Supported 00:27:17.936 Namespace Management: Not Supported 00:27:17.936 Device Self-Test: Not Supported 00:27:17.936 Directives: Not Supported 00:27:17.936 NVMe-MI: Not Supported 00:27:17.936 Virtualization Management: Not Supported 00:27:17.936 Doorbell Buffer Config: Not Supported 00:27:17.936 Get LBA Status Capability: Not Supported 00:27:17.936 Command & Feature Lockdown Capability: Not Supported 00:27:17.936 Abort Command Limit: 4 00:27:17.936 Async Event Request Limit: 4 00:27:17.936 Number of Firmware Slots: N/A 00:27:17.936 Firmware Slot 1 Read-Only: N/A 00:27:17.936 Firmware Activation Without Reset: N/A 00:27:17.936 Multiple Update Detection Support: N/A 00:27:17.936 Firmware Update Granularity: No Information Provided 00:27:17.936 Per-Namespace SMART Log: Yes 00:27:17.936 Asymmetric Namespace Access Log Page: Supported 00:27:17.936 ANA Transition Time : 10 sec 00:27:17.936 00:27:17.936 Asymmetric Namespace Access Capabilities 00:27:17.936 ANA Optimized State : Supported 00:27:17.936 ANA Non-Optimized State : Supported 00:27:17.936 ANA Inaccessible State : Supported 00:27:17.936 ANA Persistent Loss State : Supported 00:27:17.936 ANA Change State : Supported 00:27:17.936 ANAGRPID is not 
changed : No 00:27:17.936 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:17.936 00:27:17.936 ANA Group Identifier Maximum : 128 00:27:17.936 Number of ANA Group Identifiers : 128 00:27:17.936 Max Number of Allowed Namespaces : 1024 00:27:17.936 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:17.936 Command Effects Log Page: Supported 00:27:17.936 Get Log Page Extended Data: Supported 00:27:17.936 Telemetry Log Pages: Not Supported 00:27:17.936 Persistent Event Log Pages: Not Supported 00:27:17.936 Supported Log Pages Log Page: May Support 00:27:17.936 Commands Supported & Effects Log Page: Not Supported 00:27:17.936 Feature Identifiers & Effects Log Page:May Support 00:27:17.936 NVMe-MI Commands & Effects Log Page: May Support 00:27:17.936 Data Area 4 for Telemetry Log: Not Supported 00:27:17.936 Error Log Page Entries Supported: 128 00:27:17.936 Keep Alive: Supported 00:27:17.936 Keep Alive Granularity: 1000 ms 00:27:17.936 00:27:17.936 NVM Command Set Attributes 00:27:17.936 ========================== 00:27:17.936 Submission Queue Entry Size 00:27:17.936 Max: 64 00:27:17.936 Min: 64 00:27:17.936 Completion Queue Entry Size 00:27:17.936 Max: 16 00:27:17.936 Min: 16 00:27:17.936 Number of Namespaces: 1024 00:27:17.936 Compare Command: Not Supported 00:27:17.936 Write Uncorrectable Command: Not Supported 00:27:17.936 Dataset Management Command: Supported 00:27:17.936 Write Zeroes Command: Supported 00:27:17.936 Set Features Save Field: Not Supported 00:27:17.936 Reservations: Not Supported 00:27:17.936 Timestamp: Not Supported 00:27:17.936 Copy: Not Supported 00:27:17.936 Volatile Write Cache: Present 00:27:17.936 Atomic Write Unit (Normal): 1 00:27:17.936 Atomic Write Unit (PFail): 1 00:27:17.936 Atomic Compare & Write Unit: 1 00:27:17.936 Fused Compare & Write: Not Supported 00:27:17.936 Scatter-Gather List 00:27:17.936 SGL Command Set: Supported 00:27:17.936 SGL Keyed: Not Supported 00:27:17.936 SGL Bit Bucket Descriptor: Not Supported 00:27:17.936 SGL Metadata Pointer: Not Supported 00:27:17.936 Oversized SGL: Not Supported 00:27:17.936 SGL Metadata Address: Not Supported 00:27:17.936 SGL Offset: Supported 00:27:17.936 Transport SGL Data Block: Not Supported 00:27:17.936 Replay Protected Memory Block: Not Supported 00:27:17.936 00:27:17.936 Firmware Slot Information 00:27:17.936 ========================= 00:27:17.936 Active slot: 0 00:27:17.936 00:27:17.936 Asymmetric Namespace Access 00:27:17.936 =========================== 00:27:17.936 Change Count : 0 00:27:17.936 Number of ANA Group Descriptors : 1 00:27:17.936 ANA Group Descriptor : 0 00:27:17.936 ANA Group ID : 1 00:27:17.936 Number of NSID Values : 1 00:27:17.936 Change Count : 0 00:27:17.936 ANA State : 1 00:27:17.936 Namespace Identifier : 1 00:27:17.936 00:27:17.936 Commands Supported and Effects 00:27:17.936 ============================== 00:27:17.936 Admin Commands 00:27:17.936 -------------- 00:27:17.936 Get Log Page (02h): Supported 00:27:17.936 Identify (06h): Supported 00:27:17.936 Abort (08h): Supported 00:27:17.936 Set Features (09h): Supported 00:27:17.936 Get Features (0Ah): Supported 00:27:17.936 Asynchronous Event Request (0Ch): Supported 00:27:17.936 Keep Alive (18h): Supported 00:27:17.936 I/O Commands 00:27:17.936 ------------ 00:27:17.936 Flush (00h): Supported 00:27:17.936 Write (01h): Supported LBA-Change 00:27:17.936 Read (02h): Supported 00:27:17.936 Write Zeroes (08h): Supported LBA-Change 00:27:17.936 Dataset Management (09h): Supported 00:27:17.936 00:27:17.936 Error Log 00:27:17.936 ========= 
00:27:17.936 Entry: 0 00:27:17.936 Error Count: 0x3 00:27:17.936 Submission Queue Id: 0x0 00:27:17.936 Command Id: 0x5 00:27:17.936 Phase Bit: 0 00:27:17.936 Status Code: 0x2 00:27:17.936 Status Code Type: 0x0 00:27:17.936 Do Not Retry: 1 00:27:17.936 Error Location: 0x28 00:27:17.936 LBA: 0x0 00:27:17.936 Namespace: 0x0 00:27:17.936 Vendor Log Page: 0x0 00:27:17.936 ----------- 00:27:17.936 Entry: 1 00:27:17.936 Error Count: 0x2 00:27:17.936 Submission Queue Id: 0x0 00:27:17.936 Command Id: 0x5 00:27:17.936 Phase Bit: 0 00:27:17.936 Status Code: 0x2 00:27:17.936 Status Code Type: 0x0 00:27:17.936 Do Not Retry: 1 00:27:17.936 Error Location: 0x28 00:27:17.936 LBA: 0x0 00:27:17.936 Namespace: 0x0 00:27:17.936 Vendor Log Page: 0x0 00:27:17.936 ----------- 00:27:17.936 Entry: 2 00:27:17.936 Error Count: 0x1 00:27:17.936 Submission Queue Id: 0x0 00:27:17.936 Command Id: 0x4 00:27:17.936 Phase Bit: 0 00:27:17.936 Status Code: 0x2 00:27:17.936 Status Code Type: 0x0 00:27:17.936 Do Not Retry: 1 00:27:17.936 Error Location: 0x28 00:27:17.936 LBA: 0x0 00:27:17.936 Namespace: 0x0 00:27:17.936 Vendor Log Page: 0x0 00:27:17.936 00:27:17.936 Number of Queues 00:27:17.936 ================ 00:27:17.936 Number of I/O Submission Queues: 128 00:27:17.936 Number of I/O Completion Queues: 128 00:27:17.936 00:27:17.936 ZNS Specific Controller Data 00:27:17.936 ============================ 00:27:17.936 Zone Append Size Limit: 0 00:27:17.936 00:27:17.936 00:27:17.936 Active Namespaces 00:27:17.936 ================= 00:27:17.936 get_feature(0x05) failed 00:27:17.936 Namespace ID:1 00:27:17.936 Command Set Identifier: NVM (00h) 00:27:17.936 Deallocate: Supported 00:27:17.937 Deallocated/Unwritten Error: Not Supported 00:27:17.937 Deallocated Read Value: Unknown 00:27:17.937 Deallocate in Write Zeroes: Not Supported 00:27:17.937 Deallocated Guard Field: 0xFFFF 00:27:17.937 Flush: Supported 00:27:17.937 Reservation: Not Supported 00:27:17.937 Namespace Sharing Capabilities: Multiple Controllers 00:27:17.937 Size (in LBAs): 3750748848 (1788GiB) 00:27:17.937 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:17.937 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:17.937 UUID: a3b9fc78-22fc-4ba8-881b-402eb2689967 00:27:17.937 Thin Provisioning: Not Supported 00:27:17.937 Per-NS Atomic Units: Yes 00:27:17.937 Atomic Write Unit (Normal): 8 00:27:17.937 Atomic Write Unit (PFail): 8 00:27:17.937 Preferred Write Granularity: 8 00:27:17.937 Atomic Compare & Write Unit: 8 00:27:17.937 Atomic Boundary Size (Normal): 0 00:27:17.937 Atomic Boundary Size (PFail): 0 00:27:17.937 Atomic Boundary Offset: 0 00:27:17.937 NGUID/EUI64 Never Reused: No 00:27:17.937 ANA group ID: 1 00:27:17.937 Namespace Write Protected: No 00:27:17.937 Number of LBA Formats: 1 00:27:17.937 Current LBA Format: LBA Format #00 00:27:17.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:17.937 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.937 rmmod nvme_tcp 00:27:17.937 rmmod nvme_fabrics 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.937 12:31:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:20.482 12:31:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.725 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:27:24.725 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:24.725 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:24.725 00:27:24.725 real 0m20.055s 00:27:24.725 user 0m5.555s 00:27:24.725 sys 0m11.611s 00:27:24.725 12:31:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:24.725 12:31:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.725 ************************************ 00:27:24.725 END TEST nvmf_identify_kernel_target 00:27:24.725 ************************************ 00:27:24.725 12:31:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:24.725 12:31:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:24.725 12:31:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:24.725 12:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.725 ************************************ 00:27:24.725 START TEST nvmf_auth_host 00:27:24.726 ************************************ 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:24.726 * Looking for test storage... 00:27:24.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.726 12:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:32.864 12:31:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:32.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:32.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:32.864 Found net devices under 0000:31:00.0: cvl_0_0 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:32.864 Found net devices under 0000:31:00.1: cvl_0_1 00:27:32.864 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.865 
12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.865 12:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:27:32.865 00:27:32.865 --- 10.0.0.2 ping statistics --- 00:27:32.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.865 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:32.865 00:27:32.865 --- 10.0.0.1 ping statistics --- 00:27:32.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.865 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=816013 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 816013 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 816013 ']' 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
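The prologue just traced is worth annotating: nvmf_tcp_init carves the two-port e810 NIC into a self-contained rig by moving cvl_0_0 into a private network namespace, addressing the target side as 10.0.0.2 and the initiator side as 10.0.0.1, and proving reachability with one ping in each direction before nvmf_tgt comes up inside the namespace. A minimal sketch of that sequence, using only commands and names shown in the trace above:

# Condensed replay of the nvmf_tcp_init + nvmfappstart steps traced above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # the two pings shown above
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &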
00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:32.865 12:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e739af6d23f487648310bff1d123d706 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MZd 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e739af6d23f487648310bff1d123d706 0 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e739af6d23f487648310bff1d123d706 0 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e739af6d23f487648310bff1d123d706 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MZd 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MZd 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.MZd 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:33.804 
12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=05e32ddb63b88d8b1e776be02d1987c1818c1746d3cb585592786f92131dc218 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nej 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 05e32ddb63b88d8b1e776be02d1987c1818c1746d3cb585592786f92131dc218 3 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 05e32ddb63b88d8b1e776be02d1987c1818c1746d3cb585592786f92131dc218 3 00:27:33.804 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=05e32ddb63b88d8b1e776be02d1987c1818c1746d3cb585592786f92131dc218 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nej 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nej 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nej 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57a548df2e34046ccec1c2aa83beaf06e8b7b7d83395bbcd 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nwT 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57a548df2e34046ccec1c2aa83beaf06e8b7b7d83395bbcd 0 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57a548df2e34046ccec1c2aa83beaf06e8b7b7d83395bbcd 0 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57a548df2e34046ccec1c2aa83beaf06e8b7b7d83395bbcd 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nwT 00:27:33.805 12:31:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nwT 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nwT 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e0dff3adb40d46bb098edf69c41ad4b71f7b64767d40f6a 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hv3 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e0dff3adb40d46bb098edf69c41ad4b71f7b64767d40f6a 2 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e0dff3adb40d46bb098edf69c41ad4b71f7b64767d40f6a 2 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e0dff3adb40d46bb098edf69c41ad4b71f7b64767d40f6a 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hv3 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hv3 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Hv3 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=407db900594fb0ecdba3b9c6afecf678 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lkk 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 407db900594fb0ecdba3b9c6afecf678 1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 407db900594fb0ecdba3b9c6afecf678 1 
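Every gen_dhchap_key call traced through this stretch follows the same recipe: read len/2 bytes from /dev/urandom as a hex string via xxd, write it to a temp file, and let format_dhchap_key wrap it in the DHHC-1:<digest-id>:<secret>: framing before the chmod 0600. The python program itself is not echoed by xtrace; the sketch below reconstructs it on the assumption that the secret is the base64 of the ASCII key plus a little-endian CRC-32 trailer, which is the standard NVMe-oF DH-HMAC-CHAP secret representation:

# Hedged reconstruction of 'gen_dhchap_key null 32' (digest id 00 = null).
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 32 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")      # assumption: CRC-32 trailer
print(f"DHHC-1:00:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"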
00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=407db900594fb0ecdba3b9c6afecf678 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:33.805 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lkk 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lkk 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.lkk 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=49ce7417aaa7cce983a4360e741ff3c1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VQg 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 49ce7417aaa7cce983a4360e741ff3c1 1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 49ce7417aaa7cce983a4360e741ff3c1 1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=49ce7417aaa7cce983a4360e741ff3c1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:34.065 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VQg 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VQg 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VQg 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e5d30fd6f0efb0c12e493d6550bdf4aadb6523500bda37cc 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Yfm 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e5d30fd6f0efb0c12e493d6550bdf4aadb6523500bda37cc 2 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e5d30fd6f0efb0c12e493d6550bdf4aadb6523500bda37cc 2 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e5d30fd6f0efb0c12e493d6550bdf4aadb6523500bda37cc 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Yfm 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Yfm 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Yfm 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2372f6377e017e1b77aa4948c82f955f 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.txz 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2372f6377e017e1b77aa4948c82f955f 0 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2372f6377e017e1b77aa4948c82f955f 0 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2372f6377e017e1b77aa4948c82f955f 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.txz 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.txz 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.txz 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07fd94578336683a07077730a455d8747a5532e95e0eb00c7122af7cb99795d5 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DA1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07fd94578336683a07077730a455d8747a5532e95e0eb00c7122af7cb99795d5 3 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07fd94578336683a07077730a455d8747a5532e95e0eb00c7122af7cb99795d5 3 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07fd94578336683a07077730a455d8747a5532e95e0eb00c7122af7cb99795d5 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DA1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DA1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DA1 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 816013 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 816013 ']' 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
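That completes the key material: keys[0..4] plus controller keys ckeys[0..3] (ckeys[4] is deliberately left empty), all sitting in /tmp with 0600 permissions. Under the same CRC-trailer assumption noted above, a generated secret can be sanity-checked by peeling off the framing and measuring the decoded payload:

# Structural check of one generated secret (path taken from the trace above).
secret=$(cat /tmp/spdk.key-null.MZd)      # DHHC-1:00:<base64>:
payload=${secret#DHHC-1:*:}               # strip prefix and digest id
payload=${payload%:}                      # strip trailing colon
echo -n "$payload" | base64 -d | wc -c    # 32-char key + 4-byte CRC -> expect 36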
00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:34.066 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MZd 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nej ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nej 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nwT 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Hv3 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hv3 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.lkk 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VQg ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VQg 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
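The host/auth.sh@80 loop entered here (and continuing below for key3 and key4) registers every generated file with the target as a named file-based keyring entry, pairing each keyN with its ckeyN when one exists. Condensed, with rpc.py standing in for the harness's netns-aware rpc_cmd wrapper:

# Keyring registration loop, condensed from the trace.
for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done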
00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Yfm 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.txz ]] 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.txz 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.327 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DA1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
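nvmet_auth_init then mirrors the earlier identify test: get_main_ns_ip resolves the initiator-side address (10.0.0.1) and configure_kernel_target builds a kernel nvmet subsystem over the local /dev/nvme0n1, exported on that address over TCP port 4420. The configfs writes spelled out across the next stretch of trace reduce to the sketch below; the attribute file names are annotations based on the kernel nvmet configfs layout, since xtrace only shows the values being echoed (the model-string write at common.sh@665 is omitted):

# configure_kernel_target, condensed; attribute names are assumed.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"        # auth.sh flips this to 0 later
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"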
00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:34.588 12:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:38.788 Waiting for block devices as requested 00:27:38.788 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:38.788 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:38.788 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:39.049 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:39.049 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:39.049 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:39.309 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:39.309 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:39.309 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:39.309 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:40.251 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:40.251 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:40.251 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:40.251 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:40.251 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:40.252 No valid GPT data, bailing 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:27:40.252 00:27:40.252 Discovery Log Number of Records 2, Generation counter 2 00:27:40.252 =====Discovery Log Entry 0====== 00:27:40.252 trtype: tcp 00:27:40.252 adrfam: ipv4 00:27:40.252 subtype: current discovery subsystem 00:27:40.252 treq: not specified, sq flow control disable supported 00:27:40.252 portid: 1 00:27:40.252 trsvcid: 4420 00:27:40.252 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:40.252 traddr: 10.0.0.1 00:27:40.252 eflags: none 00:27:40.252 sectype: none 00:27:40.252 =====Discovery Log Entry 1====== 00:27:40.252 trtype: tcp 00:27:40.252 adrfam: ipv4 00:27:40.252 subtype: nvme subsystem 00:27:40.252 treq: not specified, sq flow control disable supported 00:27:40.252 portid: 1 00:27:40.252 trsvcid: 4420 00:27:40.252 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:40.252 traddr: 10.0.0.1 00:27:40.252 eflags: none 00:27:40.252 sectype: none 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 
]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.252 nvme0n1 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.252 12:31:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.252 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.253 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.513 
12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.513 12:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 nvme0n1 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.513 12:31:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.513 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.514 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 nvme0n1 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
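
The nvmet_auth_set_key calls traced above show only bare echo commands, because xtrace does not record redirections. A minimal sketch of the target-side plumbing they imply, assuming the Linux nvmet configfs attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrlr_key (the echo destinations are not visible in this log; run as root):

hostnqn=nqn.2024-02.io.spdk:host0
hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4

    # The three bare echoes from the trace, with their assumed configfs targets.
    echo "hmac($digest)" > "$hostdir/dhchap_hash"
    echo "$dhgroup" > "$hostdir/dhchap_dhgroup"
    echo "$key" > "$hostdir/dhchap_key"
    # A controller key is written only on bidirectional passes, mirroring the
    # [[ -z $ckey ]] guard at host/auth.sh@51 above.
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrlr_key"
}

# The sha256/ffdhe2048/keyid=1 invocation seen at the top of this excerpt:
nvmet_auth_set_key_sketch sha256 ffdhe2048 \
    "DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==:" \
    "DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==:"
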
00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.036 nvme0n1 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:41.036 12:31:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.036 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.296 nvme0n1 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.296 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.297 nvme0n1 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.297 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.558 12:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.558 nvme0n1 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.558 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.818 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.818 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.818 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.819 nvme0n1 00:27:41.819 
12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.819 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.079 nvme0n1 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.079 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
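
Between those target-side key updates, the initiator half of connect_authenticate repeats for every (digest, dhgroup, keyid) combination. A condensed sketch of that loop body; the flags, NQNs, address and key names are copied from the trace, while the rpc_cmd wrapper around SPDK's scripts/rpc.py (path included below) is an assumption of this sketch:

rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # keyid 4 has no controller key in this run; mirror the
    # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion.
    local ckey=()
    if (( keyid != 4 )); then ckey=(--dhchap-ctrlr-key "ckey$keyid"); fi

    # Restrict the host to the single combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the kernel nvmet port with DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"

    # Authentication succeeded iff the controller came up, as the trace checks.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}

# e.g. the sha256/ffdhe2048/keyid=1 pass from the start of this excerpt:
connect_authenticate_sketch sha256 ffdhe2048 1
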
00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.339 nvme0n1 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.339 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.599 
12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.599 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.600 12:31:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.600 12:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 nvme0n1 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:42.860 12:31:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.860 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.120 nvme0n1 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:43.120 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.121 12:31:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.121 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.382 nvme0n1 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.382 12:31:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.382 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.383 12:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.643 nvme0n1 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.643 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.906 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.166 nvme0n1 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.166 12:31:49 
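The recurring nvme0n1 lines mark the namespace appearing after a successful authenticated attach; the name check that follows is the verification half of each round. A sketch under the same rpc.py assumption, mirroring the commands visible in the trace:

# Confirm the controller came up, then tear it down so the next
# (digest, dhgroup, keyid) combination starts from a clean state.
name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # xtrace renders the quoted rhs as \n\v\m\e\0 above
rpc.py bdev_nvme_detach_controller nvme0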
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.166 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.427 nvme0n1 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:44.427 12:31:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.427 12:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.428 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.428 12:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.040 nvme0n1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.040 
12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.040 12:31:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.040 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.611 nvme0n1 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:45.611 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.612 12:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.612 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.872 nvme0n1 00:27:45.872 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.872 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.872 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.872 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.872 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.873 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.873 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.873 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.873 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.873 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.132 
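The ckey array assignment visible at host/auth.sh@58 relies on bash's :+ expansion: the --dhchap-ctrlr-key argument pair is produced only when a controller key exists for the key slot, which is why keyid 4 (whose ckey is empty, per the [[ -z '' ]] checks in this trace) attaches with --dhchap-key alone. A standalone illustration with hypothetical array contents:

ckeys=([0]="DHHC-1:03:..." [4]="")   # hypothetical; slot 4 has no ctrlr key
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "extra args: ${ckey[@]:-<none>}"   # prints <none>; keyid=0 would print the flag pair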
12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.132 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.133 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.392 nvme0n1 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.393 12:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.659 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.660 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.921 nvme0n1 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.921 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
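The host/auth.sh@101-104 markers recurring through this trace outline the sweep that drives it: an outer loop over DH groups and an inner loop over key slots, each iteration provisioning the target and then connecting. A sketch of that shape; the exact contents of dhgroups are an assumption (this excerpt shows ffdhe4096, ffdhe6144 and ffdhe8192 exercised with sha256):

dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side (auth.sh@103)
        connect_authenticate sha256 "$dhgroup" "$keyid"  # initiator side (auth.sh@104)
    done
done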
ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.181 12:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 nvme0n1 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.751 12:31:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.751 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.011 12:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.582 nvme0n1 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.582 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.523 nvme0n1 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.523 
12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
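
The trace above repeats one fixed pattern per (digest, dhgroup, keyid) tuple, and the host/auth.sh@100-104 tags show the loop that drives it. A minimal sketch of that sweep, reconstructed from the loop headers visible in the trace; the array contents below are illustrative stand-ins (only the values that actually appear in this log are listed), and nvmet_auth_set_key / connect_authenticate are the host/auth.sh helpers being traced:

    digests=(sha256 sha384)                              # the trace shows at least these two
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)   # groups visible in this log
    # keys/ckeys hold the DHHC-1 secrets echoed in the trace, indexed 0..4
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # program the kernel nvmet target with this digest/dhgroup/key combination
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # attach with DH-HMAC-CHAP, verify the controller came up, then detach
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
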
00:27:49.523 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.524 12:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.094 nvme0n1 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.094 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.354 
12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.354 12:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 nvme0n1 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.926 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.187 nvme0n1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
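
Condensed from the connect_authenticate traces above, each iteration reduces to four RPCs; rpc_cmd is the suite's wrapper around SPDK's rpc.py, and the sha384/ffdhe2048/key1 values are the ones from the iteration in progress here:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # bidirectional authentication
    # authentication succeeded iff the controller shows up under its expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
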
00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.187 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.448 nvme0n1 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:51.448 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.449 12:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 nvme0n1 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 nvme0n1 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.711 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.972 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.973 nvme0n1 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.973 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
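
One detail worth calling out: the host/auth.sh@58 line in the traces builds the controller-key argument with bash's ${var:+word} expansion, so the option pair is emitted only when a ckey exists for that keyid; that is why the keyid=4 attach above passes --dhchap-key key4 with no --dhchap-ctrlr-key. A toy illustration with made-up values:

    ckeys=("c0" "c1" "c2" "c3" "")   # illustrative; keyid 4 has no controller key
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"               # 0 elements: nothing is added to the RPC
    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"                # --dhchap-ctrlr-key ckey1
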
00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 nvme0n1 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
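
The get_main_ns_ip traces (nvmf/common.sh@741-755) recur just as often: the helper maps the transport to the name of the environment variable holding the address to dial, then echoes that variable's value via indirect expansion, 10.0.0.1 in this run. A rough, hedged reconstruction; the transport variable name and the hardcoded values below are assumptions standing in for what the surrounding suite exports:

    TEST_TRANSPORT=tcp               # assumption: set by the test environment
    NVMF_INITIATOR_IP=10.0.0.1       # assumption: exported by the network setup
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out if the transport is unset or has no candidate variable (@747)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: the variable's value (@750)
        echo "${!ip}"                          # 10.0.0.1 here (@755)
    }
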
00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.234 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.495 12:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.495 nvme0n1 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.495 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.757 nvme0n1 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.757 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.018 nvme0n1 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.018 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 nvme0n1 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.279 12:31:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.279 12:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.539 nvme0n1 00:27:53.539 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.539 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.539 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.540 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.540 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.800 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.801 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 nvme0n1 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.061 12:31:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.061 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.322 nvme0n1 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:54.322 12:31:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.322 12:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.582 nvme0n1 00:27:54.582 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.582 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.582 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.582 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.582 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.842 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.842 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.842 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.842 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.842 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:54.843 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 nvme0n1 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.103 12:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.674 nvme0n1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.674 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 nvme0n1 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.246 12:32:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.246 12:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.507 nvme0n1 00:27:56.507 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.769 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.340 nvme0n1 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.340 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
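
Each pass traced above repeats one pattern per (digest, dhgroup, keyid) combination: nvmet_auth_set_key (host/auth.sh@42-51) programs the target side for the key under test — the echo statements at @48-@51 emit the HMAC name, the FFDHE group, the DHHC-1 secret, and (when defined) the controller secret, presumably into the kernel nvmet host entry's configfs attributes — and connect_authenticate (@55-@65) then restricts the SPDK initiator to the same combination, attaches, asserts that a controller named nvme0 appeared, and detaches. A minimal sketch of one such iteration, under assumptions: the configfs paths follow the stock nvmet layout, scripts/rpc.py stands in for the rpc_cmd wrapper, the DHHC-1 strings are copied verbatim from the trace, and key0/ckey0 are keyring names the test registered earlier in the run:

# target side: digest, DH group, and host/controller keys for this iteration
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"
echo ffdhe6144 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2:' > "$host/dhchap_key"
echo 'DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=:' > "$host/dhchap_ctrl_key"

# initiator side: negotiate only the combination under test, then attach
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # the [[ nvme0 == \n\v\m\e\0 ]] assertion
scripts/rpc.py bdev_nvme_detach_controller nvme0

The repeated [[ 0 == 0 ]] checks in the trace are rpc_cmd verifying each RPC's exit status before the loop moves on to the next keyid.
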
00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.341 12:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.600 nvme0n1 00:27:57.600 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.600 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.600 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.600 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.600 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.601 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
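
The secrets exercised throughout this trace use the DH-HMAC-CHAP secret representation (NVMe TP 8006): DHHC-1:<hh>:<base64>:, where <hh> names the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 field carries the raw secret (32, 48, or 64 bytes) followed by a trailing 4-byte CRC-32 of the secret. A quick length check on two keys copied verbatim from the trace (a sketch; the k00/k03 variable names are illustrative, coreutils base64 assumed):

k00='ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2'
k03='MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68='
echo -n "$k00" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC-32
echo -n "$k03" | base64 -d | wc -c   # 68 = 64-byte secret + 4-byte CRC-32

This also explains why a given keyid reuses the same DHHC-1 string across the ffdhe3072/ffdhe4096/ffdhe6144/ffdhe8192 runs: the DH group is a per-connection negotiation parameter (pinned per iteration via bdev_nvme_set_options --dhchap-dhgroups), while the key material itself stays fixed.
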
00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.862 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.434 nvme0n1 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.434 12:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.434 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.695 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.302 nvme0n1 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.302 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.303 12:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.245 nvme0n1 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.245 12:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.818 nvme0n1 00:28:00.818 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.819 12:32:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.819 12:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 nvme0n1 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 nvme0n1 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 nvme0n1 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:02.022 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.023 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 nvme0n1 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.283 12:32:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.283 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.284 12:32:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.284 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 nvme0n1 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 12:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 nvme0n1 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.873 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.874 nvme0n1 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.874 
12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.874 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:03.133 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.134 12:32:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.134 nvme0n1 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.134 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
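
The nvmet_auth_set_key entries traced above (@42-@51) are the target-side half of each pass: the digest is written as 'hmac(shaNNN)', the DH group by name, then the DHHC-1 secrets. The secrets follow the NVMe-oF representation DHHC-1:NN:<base64 of secret plus CRC-32>:, where NN records the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512), which is why the test's keys deliberately mix 00/01/02/03 prefixes. A minimal sketch of the same steps, assuming the Linux kernel nvmet configfs layout; the host path and the keys/ckeys arrays are assumptions inferred from the echoed values, not copied from auth.sh:

    # Target-side sketch of nvmet_auth_set_key (assumed configfs attribute
    # names; the real helper lives in test/nvmf/host/auth.sh):
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        # keys[] and ckeys[] are assumed populated earlier in the test
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. 'hmac(sha512)'
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"       # e.g. ffdhe3072
        echo "${key}" > "${host}/dhchap_key"               # host secret, DHHC-1:..
        # A controller secret is written only when bidirectional auth is being
        # tested; keyid 4 has ckey='' and therefore authenticates one way only,
        # matching the [[ -z '' ]] branches in the trace.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }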
00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:03.394 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.395 nvme0n1 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.395 12:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.655 12:32:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
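
The initiator-side half of each pass is the connect_authenticate trace (@55-@65): bdev_nvme_set_options pins negotiation to exactly one digest and one DH group, bdev_nvme_attach_controller supplies the DH-HMAC-CHAP key names, and the pass is judged good when the controller enumerates under its requested name. A condensed sketch mirroring those trace lines; the address is hardcoded where the trace resolves it via get_main_ns_ip, and key0/ckey0 are key names assumed to have been registered with the keyring earlier in the test:

    # Initiator-side sketch of connect_authenticate, per the @55-@65 trace:
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Pass --dhchap-ctrlr-key only when a controller key exists for this id
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only succeeds if DH-HMAC-CHAP completed; verify, tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }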
00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 nvme0n1 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.915 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.915 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.915 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.916 
12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.916 nvme0n1 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.916 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.176 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.436 nvme0n1 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.436 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.437 12:32:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.437 12:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 nvme0n1 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
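
On the host side, connect_authenticate (@104, body traced at @55 through @61) mirrors the same key: it pins the initiator to exactly one digest and one DH group, then attaches with the matching key pair. The two rpc_cmd invocations below are verbatim from the traces; the surrounding wrapper is a sketch, with rpc_cmd assumed to forward to scripts/rpc.py of the running SPDK target:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # @58: add the controller-key argument only if a ckey exists for this keyid
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # @60: restrict negotiation so the handshake must use this digest/dhgroup
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @61: connect to the kernel target and authenticate with key$keyid
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }

The @60 pinning is what turns a successful attach into a meaningful assertion: if the two sides could fall back to another digest or group, a broken ffdhe path would pass unnoticed.
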
00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.697 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.956 nvme0n1 00:28:04.956 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.956 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:04.957 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.957 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.957 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.957 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.215 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.215 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.215 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.216 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.475 nvme0n1 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.475 12:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.736 nvme0n1 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
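
The small indirection traced before every attach is get_main_ns_ip (called at @61, body in nvmf/common.sh@741 to @755): it maps the transport to the name of an environment variable, then expands that name. Reconstructed from the traces, with the transport variable's own name ($TEST_TRANSPORT here) being an assumption, since xtrace only shows its expanded value, tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP    # @744
            ["tcp"]=NVMF_INITIATOR_IP        # @745
        )
        # @747: both the transport and its candidate name must be non-empty
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # @748: ip holds a variable *name*
        [[ -z ${!ip} ]] && return 1           # @750: indirect expansion must be set
        echo "${!ip}"                         # @755: here NVMF_INITIATOR_IP -> 10.0.0.1
    }

In this run every expansion lands on NVMF_INITIATOR_IP=10.0.0.1, which is why each attach above targets -a 10.0.0.1.
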
00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.736 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.308 nvme0n1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.308 12:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 nvme0n1 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.879 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.880 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.451 nvme0n1 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.451 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.452 12:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.023 nvme0n1 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.023 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.284 nvme0n1 00:28:08.284 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.284 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.284 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.284 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.284 12:32:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.284 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTczOWFmNmQyM2Y0ODc2NDgzMTBiZmYxZDEyM2Q3MDaxwYQ2: 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDVlMzJkZGI2M2I4OGQ4YjFlNzc2YmUwMmQxOTg3YzE4MThjMTc0NmQzY2I1ODU1OTI3ODZmOTIxMzFkYzIxOOfYB68=: 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.545 12:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.115 nvme0n1 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.115 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.376 12:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.947 nvme0n1 00:28:09.947 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.947 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.948 12:32:15 
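
[annotation] The DHHC-1 strings echoed above use the NVMe-oF DH-HMAC-CHAP secret representation "DHHC-1:XX:<base64 secret>:", where XX records how the secret was transformed (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The per-keyid loop traced here can be read back from host/auth.sh as roughly the sketch below; hostnqn and subnqn are assumed variables standing in for the literal nqn.2024-02.io.spdk names in the trace:

    # Hedged reconstruction of connect_authenticate from the xtrace above.
    # rpc_cmd and get_main_ns_ip are the helpers visible in the trace; the
    # ckeys array holds optional controller keys (an empty entry means the
    # --dhchap-ctrlr-key argument is simply omitted).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }
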
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA3ZGI5MDA1OTRmYjBlY2RiYTNiOWM2YWZlY2Y2NzjdRVTs: 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDljZTc0MTdhYWE3Y2NlOTgzYTQzNjBlNzQxZmYzYzH0fvb3: 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.948 12:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.890 nvme0n1 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVkMzBmZDZmMGVmYjBjMTJlNDkzZDY1NTBiZGY0YWFkYjY1MjM1MDBiZGEzN2NjN2NgmQ==: 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjM3MmY2Mzc3ZTAxN2UxYjc3YWE0OTQ4YzgyZjk1NWbWt6Ge: 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:10.890 12:32:16 
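
[annotation] The get_main_ns_ip block that repeats throughout the trace (nvmf/common.sh@741-755) resolves the address the initiator should dial: it maps the transport name to the name of an environment variable, then dereferences it. A hedged reconstruction, with TEST_TRANSPORT assumed to carry the "tcp" seen in the [[ -z tcp ]] test; the exact guard layout in nvmf/common.sh may differ:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip} is indirect: here NVMF_INITIATOR_IP=10.0.0.1
        echo "${!ip}"
    }
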
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.890 12:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.462 nvme0n1 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.462 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdmZDk0NTc4MzM2NjgzYTA3MDc3NzMwYTQ1NWQ4NzQ3YTU1MzJlOTVlMGViMDBjNzEyMmFmN2NiOTk3OTVkNWRgDTg=: 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:28:11.724 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.296 nvme0n1 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.296 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdhNTQ4ZGYyZTM0MDQ2Y2NlYzFjMmFhODNiZWFmMDZlOGI3YjdkODMzOTViYmNk94BMCw==: 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUwZGZmM2FkYjQwZDQ2YmIwOThlZGY2OWM0MWFkNGI3MWY3YjY0NzY3ZDQwZjZhypMGJA==: 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.559 
12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 request: 00:28:12.559 { 00:28:12.559 "name": "nvme0", 00:28:12.559 "trtype": "tcp", 00:28:12.559 "traddr": "10.0.0.1", 00:28:12.559 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.559 "adrfam": "ipv4", 00:28:12.559 "trsvcid": "4420", 00:28:12.559 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.559 "method": "bdev_nvme_attach_controller", 00:28:12.559 "req_id": 1 00:28:12.559 } 00:28:12.559 Got JSON-RPC error response 00:28:12.559 response: 00:28:12.559 { 00:28:12.559 "code": -5, 00:28:12.559 "message": "Input/output error" 00:28:12.559 } 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 12:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:12.559 
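
[annotation] The attach attempted just above deliberately supplies no DHCHAP key, and the target rejects it; the failure surfaces as the JSON-RPC "Input/output error" (-5) response. The NOT wrapper turns that expected failure into a test pass. Judging from the es bookkeeping visible in the trace (local es=0, es=1, (( es > 128 )), (( !es == 0 ))), it behaves roughly like this simplified sketch; the real helper in autotest_common.sh is more elaborate:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # a signal death is never an expected failure
        (( es != 0 ))                    # succeed only if the wrapped command failed
    }
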
12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.559 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 request: 00:28:12.559 { 00:28:12.559 "name": "nvme0", 00:28:12.559 "trtype": "tcp", 00:28:12.559 "traddr": "10.0.0.1", 00:28:12.560 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.560 "adrfam": "ipv4", 00:28:12.560 "trsvcid": "4420", 00:28:12.560 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.560 "dhchap_key": "key2", 00:28:12.560 "method": "bdev_nvme_attach_controller", 00:28:12.560 "req_id": 1 00:28:12.560 } 00:28:12.560 Got JSON-RPC error response 00:28:12.560 response: 00:28:12.560 { 00:28:12.560 "code": -5, 00:28:12.560 "message": "Input/output error" 00:28:12.560 } 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:12.560 
12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.560 request: 00:28:12.560 { 00:28:12.560 "name": "nvme0", 00:28:12.560 "trtype": "tcp", 00:28:12.560 "traddr": "10.0.0.1", 00:28:12.560 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.560 "adrfam": "ipv4", 00:28:12.560 "trsvcid": "4420", 00:28:12.560 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.560 "dhchap_key": "key1", 00:28:12.560 "dhchap_ctrlr_key": "ckey2", 00:28:12.560 "method": "bdev_nvme_attach_controller", 00:28:12.560 "req_id": 1 
00:28:12.560 } 00:28:12.560 Got JSON-RPC error response 00:28:12.560 response: 00:28:12.560 { 00:28:12.560 "code": -5, 00:28:12.560 "message": "Input/output error" 00:28:12.560 } 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.560 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.821 rmmod nvme_tcp 00:28:12.821 rmmod nvme_fabrics 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 816013 ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 816013 ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 816013' 00:28:12.821 killing process with pid 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 816013 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.821 12:32:18 
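
[annotation] With the negative cases done, nvmftestfini unloads the host-side modules (the bare "rmmod nvme_tcp" / "rmmod nvme_fabrics" lines above are modprobe -v -r output) and kills the target process, pid 816013. A sketch of killprocess as traced (autotest_common.sh@949-973); the real helper has an extra branch for processes running under sudo, visible in the '[' reactor_0 = sudo ']' test:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
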
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.821 12:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:15.418 12:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.720 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.720 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.981 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:18.981 12:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.MZd /tmp/spdk.key-null.nwT /tmp/spdk.key-sha256.lkk /tmp/spdk.key-sha384.Yfm /tmp/spdk.key-sha512.DA1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:18.981 12:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:23.184 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.4 (8086 0b00): Already using the 
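
[annotation] The clean_kernel_target sequence traced above tears down the kernel-mode nvmet target that host/auth.sh built under configfs. Collected in trace order (host/auth.sh@25-27, nvmf/common.sh@684-698), with the literal NQNs pulled into variables for readability:

    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    rm    /sys/kernel/config/nvmet/subsystems/$subnqn/allowed_hosts/$hostnqn
    rmdir /sys/kernel/config/nvmet/hosts/$hostnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1/enable  # assumed redirect target of the traced `echo 0`
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$subnqn
    rmdir /sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$subnqn
    modprobe -r nvmet_tcp nvmet
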
vfio-pci driver 00:28:23.184 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:23.184 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:23.184 00:28:23.184 real 0m58.579s 00:28:23.184 user 0m51.512s 00:28:23.184 sys 0m16.116s 00:28:23.184 12:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:23.184 12:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.184 ************************************ 00:28:23.184 END TEST nvmf_auth_host 00:28:23.184 ************************************ 00:28:23.184 12:32:28 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:23.184 12:32:28 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:23.184 12:32:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:23.184 12:32:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:23.184 12:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.184 ************************************ 00:28:23.184 START TEST nvmf_digest 00:28:23.184 ************************************ 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:23.184 * Looking for test storage... 
00:28:23.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
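
[annotation] digest.sh sources nvmf/common.sh, which mints a fresh host identity for the run. As traced at nvmf/common.sh@17-19 (the suffix extraction is an assumption that matches the uuid seen in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # the uuid suffix, e.g. 801c19ac-...-a4bf019282bb
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
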
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.184 12:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.185 12:32:28 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.185 12:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:31.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:31.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:31.324 Found net devices under 0000:31:00.0: cvl_0_0 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
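
[annotation] The device scan above keys off PCI IDs: 0x8086:0x159b is one of the entries pushed into the e810 array, so 0000:31:00.0 and 0000:31:00.1 are treated as ports of an Intel E810-family NIC and their net devices are collected from sysfs. A quick way to repeat the lookup by hand (assuming lspci is available):

    lspci -d 8086:159b                            # should list 0000:31:00.0 and .1
    ls /sys/bus/pci/devices/0000:31:00.0/net/     # the netdev name, cvl_0_0 above
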
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.324 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:31.324 Found net devices under 0000:31:00.1: cvl_0_1 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:31.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.757 ms 00:28:31.325 00:28:31.325 --- 10.0.0.2 ping statistics --- 00:28:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.325 rtt min/avg/max/mdev = 0.757/0.757/0.757/0.000 ms 00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:28:31.325
00:28:31.325 --- 10.0.0.1 ping statistics ---
00:28:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:31.325 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:31.325 ************************************
00:28:31.325 START TEST nvmf_digest_clean
00:28:31.325 ************************************
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=833213
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 833213
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 833213 ']'
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:31.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:31.325 12:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:31.325 [2024-06-10 12:32:36.522291] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:31.325 [2024-06-10 12:32:36.522355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:31.325 EAL: No free 2048 kB hugepages reported on node 1
00:28:31.325 [2024-06-10 12:32:36.600473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:31.325 [2024-06-10 12:32:36.673804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:31.325 [2024-06-10 12:32:36.673844] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:31.325 [2024-06-10 12:32:36.673851] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:31.325 [2024-06-10 12:32:36.673858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:31.325 [2024-06-10 12:32:36.673864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:31.325 [2024-06-10 12:32:36.673887] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:31.897 null0
00:28:31.897 [2024-06-10 12:32:37.383373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:31.897 [2024-06-10 12:32:37.407541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=833508
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 833508 /var/tmp/bperf.sock
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 833508 ']'
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:31.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:31.897 12:32:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:31.897 [2024-06-10 12:32:37.463192] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:31.897 [2024-06-10 12:32:37.463244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833508 ]
00:28:31.897 EAL: No free 2048 kB hugepages reported on node 1
00:28:32.158 [2024-06-10 12:32:37.545677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:32.158 [2024-06-10 12:32:37.609777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:32.728 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:32.728 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:28:32.728 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:32.728 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:32.728 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:32.988 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:32.988 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.248 nvme0n1
00:28:33.248 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:33.248 12:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:33.248 Running I/O for 2 seconds...
00:28:35.793
00:28:35.793 Latency(us)
00:28:35.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:35.793 nvme0n1 : 2.00 20137.02 78.66 0.00 0.00 6347.07 3194.88 21408.43
00:28:35.793 ===================================================================================================================
00:28:35.793 Total : 20137.02 78.66 0.00 0.00 6347.07 3194.88 21408.43
00:28:35.793 0
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:35.793 | select(.opcode=="crc32c")
00:28:35.793 | "\(.module_name) \(.executed)"'
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 833508
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 833508 ']'
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 833508
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:35.793 12:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 833508
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 833508'
00:28:35.794 killing process with pid 833508
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 833508
00:28:35.794 Received shutdown signal, test time was about 2.000000 seconds
00:28:35.794
00:28:35.794 Latency(us)
00:28:35.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.794 ===================================================================================================================
00:28:35.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 833508
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=834188
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 834188 /var/tmp/bperf.sock
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 834188 ']'
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:35.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:35.794 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:35.794 [2024-06-10 12:32:41.203983] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:35.794 [2024-06-10 12:32:41.204038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834188 ]
00:28:35.794 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:35.794 Zero copy mechanism will not be used.
00:28:35.794 EAL: No free 2048 kB hugepages reported on node 1
00:28:35.794 [2024-06-10 12:32:41.287915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.794 [2024-06-10 12:32:41.350895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.364 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:36.364 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:28:36.364 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:36.364 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:36.364 12:32:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:36.629 12:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.629 12:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:37.203 nvme0n1
00:28:37.203 12:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:37.203 12:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:37.203 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:37.203 Zero copy mechanism will not be used.
00:28:37.203 Running I/O for 2 seconds...
00:28:39.118
00:28:39.118 Latency(us)
00:28:39.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.118 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:39.118 nvme0n1 : 2.01 3188.12 398.52 0.00 0.00 5015.41 1262.93 13871.79
00:28:39.118 ===================================================================================================================
00:28:39.118 Total : 3188.12 398.52 0.00 0.00 5015.41 1262.93 13871.79
00:28:39.118 0
00:28:39.118 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:39.118 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:39.118 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:39.118 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:39.118 | select(.opcode=="crc32c")
00:28:39.118 | "\(.module_name) \(.executed)"'
00:28:39.118 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 834188
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 834188 ']'
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 834188
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 834188
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 834188'
00:28:39.390 killing process with pid 834188
00:28:39.390 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 834188
00:28:39.390 Received shutdown signal, test time was about 2.000000 seconds
00:28:39.390
00:28:39.390 Latency(us)
00:28:39.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.391 ===================================================================================================================
00:28:39.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 834188
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=834874
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 834874 /var/tmp/bperf.sock
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 834874 ']'
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:39.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:39.391 12:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:39.675 [2024-06-10 12:32:45.005728] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:39.675 [2024-06-10 12:32:45.005781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834874 ]
00:28:39.675 EAL: No free 2048 kB hugepages reported on node 1
00:28:39.675 [2024-06-10 12:32:45.088051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.675 [2024-06-10 12:32:45.140265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.245 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:40.245 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:28:40.245 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:40.245 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:40.245 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:40.504 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.504 12:32:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.765 nvme0n1
00:28:40.765 12:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:40.765 12:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:41.025 Running I/O for 2 seconds...
00:28:42.935
00:28:42.935 Latency(us)
00:28:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.935 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:42.935 nvme0n1 : 2.01 21395.97 83.58 0.00 0.00 5971.22 3290.45 9611.95
00:28:42.935 ===================================================================================================================
00:28:42.935 Total : 21395.97 83.58 0.00 0.00 5971.22 3290.45 9611.95
00:28:42.935 0
00:28:42.935 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:42.935 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:42.935 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:42.935 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:42.935 | select(.opcode=="crc32c")
00:28:42.935 | "\(.module_name) \(.executed)"'
00:28:42.935 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 834874
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 834874 ']'
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 834874
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 834874
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 834874'
00:28:43.196 killing process with pid 834874
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 834874
00:28:43.196 Received shutdown signal, test time was about 2.000000 seconds
00:28:43.196
00:28:43.196 Latency(us)
00:28:43.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.196 ===================================================================================================================
00:28:43.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 834874
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:43.196 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=835612
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 835612 /var/tmp/bperf.sock
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 835612 ']'
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:43.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:43.197 12:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:43.197 [2024-06-10 12:32:48.786284] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:43.197 [2024-06-10 12:32:48.786340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835612 ]
00:28:43.197 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:43.197 Zero copy mechanism will not be used.
00:28:43.457 EAL: No free 2048 kB hugepages reported on node 1
00:28:43.457 [2024-06-10 12:32:48.865149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.457 [2024-06-10 12:32:48.918646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:44.028 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:44.028 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:28:44.028 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:44.028 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:44.028 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:44.288 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.288 12:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.549 nvme0n1
00:28:44.549 12:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:44.549 12:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:44.549 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:44.549 Zero copy mechanism will not be used.
00:28:44.549 Running I/O for 2 seconds...
00:28:47.093
00:28:47.094 Latency(us)
00:28:47.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:47.094 nvme0n1 : 2.00 7613.69 951.71 0.00 0.00 2097.63 1590.61 8465.07
00:28:47.094 ===================================================================================================================
00:28:47.094 Total : 7613.69 951.71 0.00 0.00 2097.63 1590.61 8465.07
00:28:47.094 0
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:47.094 | select(.opcode=="crc32c")
00:28:47.094 | "\(.module_name) \(.executed)"'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 835612
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 835612 ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 835612
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 835612
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 835612'
00:28:47.094 killing process with pid 835612
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 835612
00:28:47.094 Received shutdown signal, test time was about 2.000000 seconds
00:28:47.094
00:28:47.094 Latency(us)
00:28:47.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.094 ===================================================================================================================
00:28:47.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 835612
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 833213
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 833213 ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 833213
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 833213
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 833213'
00:28:47.094 killing process with pid 833213
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 833213
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 833213
00:28:47.094
00:28:47.094 real 0m16.202s
00:28:47.094 user 0m31.614s
00:28:47.094 sys 0m3.499s
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:47.094 ************************************
00:28:47.094 END TEST nvmf_digest_clean
00:28:47.094 ************************************
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable
00:28:47.094 12:32:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:47.355 ************************************
00:28:47.355 START TEST nvmf_digest_error
00:28:47.355 ************************************
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=836556
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 836556
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 836556 ']'
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:47.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:47.355 12:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.355 [2024-06-10 12:32:52.792947] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:47.355 [2024-06-10 12:32:52.793005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:47.355 EAL: No free 2048 kB hugepages reported on node 1
00:28:47.355 [2024-06-10 12:32:52.869641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.355 [2024-06-10 12:32:52.942667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:47.355 [2024-06-10 12:32:52.942704] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:47.355 [2024-06-10 12:32:52.942712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:47.355 [2024-06-10 12:32:52.942718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:47.355 [2024-06-10 12:32:52.942724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:47.355 [2024-06-10 12:32:52.942749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.298 [2024-06-10 12:32:53.600641] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.298 null0
00:28:48.298 [2024-06-10 12:32:53.681233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:48.298 [2024-06-10 12:32:53.705426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:48.298 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=836623
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 836623 /var/tmp/bperf.sock
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 836623 ']'
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:48.299 12:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.299 [2024-06-10 12:32:53.765371] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:48.299 [2024-06-10 12:32:53.765417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836623 ]
00:28:48.299 EAL: No free 2048 kB hugepages reported on node 1
00:28:48.299 [2024-06-10 12:32:53.844942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:48.299 [2024-06-10 12:32:53.898720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.241 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.502 nvme0n1
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:49.502 12:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:49.502 Running I/O for 2 seconds...
00:28:49.502 [2024-06-10 12:32:55.061745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.502 [2024-06-10 12:32:55.061776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.502 [2024-06-10 12:32:55.061785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.502 [2024-06-10 12:32:55.075472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.502 [2024-06-10 12:32:55.075492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.502 [2024-06-10 12:32:55.075499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.502 [2024-06-10 12:32:55.088378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.502 [2024-06-10 12:32:55.088397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.502 [2024-06-10 12:32:55.088404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.502 [2024-06-10 12:32:55.101464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.502 [2024-06-10 12:32:55.101481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.502 [2024-06-10 12:32:55.101488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.112494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.112512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.112519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.125662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.125680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.125687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.138872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.138890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.138897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.150379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.150396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.150402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.163509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.163527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.163533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.177018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.177035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.177042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.188185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.188206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.188213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.764 [2024-06-10 12:32:55.198522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.764 [2024-06-10 12:32:55.198538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.764 [2024-06-10 12:32:55.198545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.211389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.211407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.211414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.224190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.224211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.238050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.238067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.238073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.249746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.249763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.249772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.261969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.261985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.261992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.273017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.273034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.273040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.284663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.284680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.284686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.297019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.297036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.297042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.765 [2024-06-10 12:32:55.310443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:49.765 [2024-06-10 12:32:55.310461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.765 [2024-06-10 12:32:55.310467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.765 [2024-06-10 12:32:55.323054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:49.765 [2024-06-10 12:32:55.323071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.765 [2024-06-10 12:32:55.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.765 [2024-06-10 12:32:55.336040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:49.765 [2024-06-10 12:32:55.336057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.765 [2024-06-10 12:32:55.336063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.765 [2024-06-10 12:32:55.347033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:49.765 [2024-06-10 12:32:55.347050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.765 [2024-06-10 12:32:55.347056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.765 [2024-06-10 12:32:55.358988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:49.765 [2024-06-10 12:32:55.359005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.765 [2024-06-10 12:32:55.359012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.026 [2024-06-10 12:32:55.371728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.371746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.371752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.384312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.384329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.384335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.397027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.397044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:1355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.397050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.408257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.408274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.408280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.420473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.420489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.420496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.434331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.434349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.446520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.446536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.446542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.458353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.458371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.458380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.470431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.470448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.470454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.482997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.483014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.483021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.496571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.496588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.496594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.506232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.506250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.506256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.520133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.520150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.520157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.532989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.533006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.544816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.544832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.544838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.556901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.556919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.556925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.569762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 
00:28:50.027 [2024-06-10 12:32:55.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.569788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.580446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.580464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.580470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.592301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.592318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.592325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.606554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.606571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.606578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.618429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.618447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.618454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.027 [2024-06-10 12:32:55.630898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.027 [2024-06-10 12:32:55.630916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.027 [2024-06-10 12:32:55.630923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.641807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.641824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.641831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.655498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.655515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.655522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.670246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.670263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.670269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.684009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.684027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.684034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.694313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.694331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.694337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.706736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.706759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.720318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.720336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.720342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.732790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.732814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.745293] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.745310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.745317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.759127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.759144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.759151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.770052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.770070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.770077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.782209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.782227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.782238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.794829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.794847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.794853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.808644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.808662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.808668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.821038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.821055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.821062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:50.289 [2024-06-10 12:32:55.832776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.289 [2024-06-10 12:32:55.832794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.289 [2024-06-10 12:32:55.832800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.289 [2024-06-10 12:32:55.843632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.290 [2024-06-10 12:32:55.843650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.290 [2024-06-10 12:32:55.843656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.290 [2024-06-10 12:32:55.856481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.290 [2024-06-10 12:32:55.856497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.290 [2024-06-10 12:32:55.856504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.290 [2024-06-10 12:32:55.870569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.290 [2024-06-10 12:32:55.870586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.290 [2024-06-10 12:32:55.870592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.290 [2024-06-10 12:32:55.882362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.290 [2024-06-10 12:32:55.882378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.290 [2024-06-10 12:32:55.882385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.290 [2024-06-10 12:32:55.892832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.290 [2024-06-10 12:32:55.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.290 [2024-06-10 12:32:55.892855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.905561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.905578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.905585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.917819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.917836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.917843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.931409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.931427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.931433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.943600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.943617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.943624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.956225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.956242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.956248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.968537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.968553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.968560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.978957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.978974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.978981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:55.991965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:55.991982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:55.991992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.005090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.005108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.005114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.018384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.018401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.018408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.031297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.031315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.031321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.043644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.043661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.043668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.053631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.053649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.053655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.065529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.551 [2024-06-10 12:32:56.065546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.551 [2024-06-10 12:32:56.065553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.551 [2024-06-10 12:32:56.078796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.078813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 
[2024-06-10 12:32:56.078819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.090951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.090969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.090975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.104029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.104049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.104056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.117011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.117028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.117035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.128624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.128640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.128647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.140499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.140516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.140522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.552 [2024-06-10 12:32:56.152588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.552 [2024-06-10 12:32:56.152606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.552 [2024-06-10 12:32:56.152612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.163359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.163376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20857 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.163383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.176732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.176750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.176757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.189915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.189933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.189939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.205241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.205259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.205265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.216761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.216778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.216785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.228785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.228803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.241698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.241716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.241723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.253396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.253413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.253420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.265609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.265626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.265633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.276817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.276834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.276841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.289726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.289744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.289750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.302281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.302298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.302305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.314666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.314683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.814 [2024-06-10 12:32:56.314694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.814 [2024-06-10 12:32:56.327536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.814 [2024-06-10 12:32:56.327554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.327560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.339813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 
12:32:56.339830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.339837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.352649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.352666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.352673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.363679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.363696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.363702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.376410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.376426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.376433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.389118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.389135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.389141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.403331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.403347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.403353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.815 [2024-06-10 12:32:56.414721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60) 00:28:50.815 [2024-06-10 12:32:56.414738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.815 [2024-06-10 12:32:56.414744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.077 [2024-06-10 12:32:56.425663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x193ce60)
00:28:51.077 [2024-06-10 12:32:56.425683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.077 [2024-06-10 12:32:56.425690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same three-line pattern (data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR) repeated for dozens of single-block READs on tqpair 0x193ce60, timestamps 12:32:56.439206 through 12:32:57.032877, cids and lbas varying ...]
00:28:51.602 [2024-06-10 12:32:57.044447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x193ce60)
00:28:51.602 [2024-06-10 12:32:57.044464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.602 [2024-06-10 12:32:57.044470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.602
00:28:51.602 Latency(us)
00:28:51.602 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
00:28:51.602 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:51.602 nvme0n1 : 2.00  20559.25  80.31  0.00  0.00  6220.41  1979.73  18677.76
00:28:51.602 ===================================================================================================================
00:28:51.602 Total : 20559.25  80.31  0.00  0.00  6220.41  1979.73  18677.76
00:28:51.602 0
00:28:51.602 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:51.602 | .driver_specific
00:28:51.602 | .nvme_error
00:28:51.602 | .status_code
00:28:51.602 | .command_transient_transport_error'
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 ))
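The errcount check traced above reduces to one iostat RPC plus a jq path. A minimal standalone sketch of the same query, assuming a bdevperf instance is still serving RPC on /var/tmp/bperf.sock and exposing a bdev named nvme0n1 (paths and names taken from the log, not from the test script itself):

    #!/usr/bin/env bash
    # Sketch only: read the transient-transport-error counter that failed
    # data digest checks increment on the initiator side.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # digest.sh asserts the injected corruption actually surfaced as errors:
    (( errcount > 0 )) || exit 1

Here 161 such completions were counted against the ~20559 IOPS run, so the assertion holds and the test proceeds to tear the process down.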
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 836623
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 836623 ']'
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 836623
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 836623
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 836623'
00:28:51.863 killing process with pid 836623
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 836623
00:28:51.863 Received shutdown signal, test time was about 2.000000 seconds
00:28:51.863
00:28:51.863 Latency(us)
00:28:51.863 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:28:51.863 ===================================================================================================================
00:28:51.863 Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 836623
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=837311
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 837311 /var/tmp/bperf.sock
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 837311 ']'
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:51.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:51.863 12:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:52.123 [2024-06-10 12:32:57.443536] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:52.123 [2024-06-10 12:32:57.443594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837311 ]
00:28:52.123 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:52.123 Zero copy mechanism will not be used.
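For reference, the launch-and-wait pattern traced above looks roughly like the following as standalone shell. SPDK's waitforlisten helper is approximated here with a simple RPC poll; the poll loop is an assumption, not the helper's actual body:

    # Background bdevperf with the flags echoed in the log, then block until
    # its UNIX-domain RPC socket answers.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bperfpid" 2>/dev/null || exit 1   # bail out if bdevperf died early
        sleep 0.2
    done

The -z flag keeps bdevperf idle until it is started over RPC, which is why perform_tests is issued as a separate step once the controller is attached below.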
00:28:52.123 EAL: No free 2048 kB hugepages reported on node 1
00:28:52.123 [2024-06-10 12:32:57.523948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:52.123 [2024-06-10 12:32:57.577375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:52.694 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:52.694 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:52.694 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:52.694 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:52.956 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:53.216 nvme0n1
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:53.216 12:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:53.477 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:53.477 Zero copy mechanism will not be used.
00:28:53.477 Running I/O for 2 seconds...
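Condensed, the setup just traced is four RPCs plus the test kick-off. A sketch, with socket targets inferred from the trace (bperf_rpc clearly talks to /var/tmp/bperf.sock; rpc_cmd going to the default application socket is an assumption here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket
    RPC="$SPDK/scripts/rpc.py"                            # default socket (assumed for rpc_cmd)
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable    # injection off while connecting
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # --ddgst enables data digest
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt every 32nd crc32c op
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With every 32nd crc32c operation corrupted, each affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the two-second run below logs.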
00:28:53.477 [2024-06-10 12:32:58.879398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0)
00:28:53.477 [2024-06-10 12:32:58.879429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.477 [2024-06-10 12:32:58.879438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... same three-line pattern (data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR) repeated for dozens of 32-block READs on tqpair 0x82ede0, timestamps 12:32:58.889630 through 12:32:59.599981, cids and lbas varying ...]
00:28:54.265 [2024-06-10 12:32:59.610863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0)
00:28:54.265 [2024-06-10 12:32:59.610882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.265 [2024-06-10 12:32:59.610889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.265 [2024-06-10 12:32:59.621094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.621112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.621119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.631365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.631384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.631391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.640587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.640606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.640612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.649981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.650000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.650009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.659693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.659711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.659718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.669247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.669266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.669272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.265 [2024-06-10 12:32:59.677929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.265 [2024-06-10 12:32:59.677948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.265 [2024-06-10 12:32:59.677954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.687772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.687789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.687796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.696159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.696177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.696183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.705858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.705876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.705882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.717463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.717482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.717488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.727622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.727641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.727647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.738017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.738036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.738043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.748585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.748603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.748609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.759153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.759171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.759178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.769157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.769176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.769183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.779038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.779057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.779063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.788173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.788190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.788201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.796595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.796614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.796620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.807085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.807103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.807110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.818283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.818301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.818310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.828665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.828684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.828690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.837514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.837533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.837539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.846745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.846764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.846771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.856204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.856222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.266 [2024-06-10 12:32:59.865434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.266 [2024-06-10 12:32:59.865452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.266 [2024-06-10 12:32:59.865458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.662 [2024-06-10 12:32:59.875427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.662 [2024-06-10 12:32:59.875445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.662 [2024-06-10 12:32:59.875452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.662 [2024-06-10 12:32:59.886599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.662 [2024-06-10 12:32:59.886617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.662 
[2024-06-10 12:32:59.886624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.662 [2024-06-10 12:32:59.897507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.662 [2024-06-10 12:32:59.897526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.662 [2024-06-10 12:32:59.897532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.662 [2024-06-10 12:32:59.907524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.662 [2024-06-10 12:32:59.907546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.907552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.918252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.918270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.918276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.927282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.927300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.927306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.936621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.936640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.936646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.946031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.946055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.954354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.954378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.965271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.965288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.965295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.976336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.976355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.976361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.988175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.988199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.988206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:32:59.998081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:32:59.998099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:32:59.998106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.009207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.009228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.009235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.020146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.020165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.020172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.030972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.030991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.030998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.039377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.039396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.039402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.048320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.048339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.048345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.057568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.057586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.057593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.068905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.068923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.068929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.078213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.078231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.078241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.089188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.089211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.089217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.101173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.101191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.101203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.112806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.112825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.112831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.122768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.122785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.122792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.131527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.131545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.131552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.140922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.140940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.140947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.151403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.151421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.151427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.161670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.161688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.161694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.171555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 
[2024-06-10 12:33:00.171577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.663 [2024-06-10 12:33:00.171583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.663 [2024-06-10 12:33:00.180279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.663 [2024-06-10 12:33:00.180297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.180303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.190377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.190395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.190402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.202686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.202704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.202711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.216030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.216047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.216054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.229469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.229487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.229494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.242490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.242509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.242515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.252189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.252211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.252218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.664 [2024-06-10 12:33:00.261256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.664 [2024-06-10 12:33:00.261275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.664 [2024-06-10 12:33:00.261281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.270817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.270835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.270842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.281778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.281796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.281803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.291247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.291265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.291272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.302074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.302092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.302099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.313129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.313147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.313153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.323248] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.323266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.323272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.333999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.334017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.334024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.344188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.344211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.344218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.354037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.354055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.354065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.363309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.363327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.363334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.373385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.373403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.925 [2024-06-10 12:33:00.373410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.925 [2024-06-10 12:33:00.382312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.925 [2024-06-10 12:33:00.382330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.382336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:54.926 [2024-06-10 12:33:00.392665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.392683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.392689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.401935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.401953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.401960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.411440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.411457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.411464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.421064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.421083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.421089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.432644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.432662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.432669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.442253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.442273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.442280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.451521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.451540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.451546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.460028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.460046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.460053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.470409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.470428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.470434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.480469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.480487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.480494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.490842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.490861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.490867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.501683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.501701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.501708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.511278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.511296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.511303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.926 [2024-06-10 12:33:00.521034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0) 00:28:54.926 [2024-06-10 12:33:00.521052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.926 [2024-06-10 12:33:00.521059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
00:28:55.187 [... repeated error records elided: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x82ede0), each followed by a READ command print and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1, varying cid and lba, at roughly 10 ms intervals from 12:33:00.52 to 12:33:00.87 ...]
00:28:55.448
00:28:55.448 Latency(us)
00:28:55.448 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:55.448 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:55.448 nvme0n1                                :       2.00    3124.37     390.55      0.00     0.00    5118.86    1078.61   13871.79
00:28:55.449 ===================================================================================================================
00:28:55.449 Total                                  :               3124.37     390.55      0.00     0.00    5118.86    1078.61   13871.79
00:28:55.449 0
00:28:55.449 12:33:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:55.449 12:33:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:55.449 12:33:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:28:55.449 12:33:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 837311
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 837311 ']'
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 837311
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 837311
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 837311'
00:28:55.709 killing process with pid 837311
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 837311
00:28:55.709 Received shutdown signal, test time was about 2.000000 seconds
00:28:55.709
00:28:55.709 Latency(us)
00:28:55.709 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:55.709 ===================================================================================================================
00:28:55.709 Total                                  :                  0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 837311
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
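The get_transient_errcount step above reads the bdev's NVMe error counters over bdevperf's RPC socket and extracts the number of completions that ended in TRANSIENT TRANSPORT ERROR (00/22); the assertion (( 201 > 0 )) passes because 201 such errors were counted. A minimal standalone sketch of the same query, assuming $SPDK_DIR points at the SPDK checkout used in this run and a bdevperf instance is still listening on /var/tmp/bperf.sock:

  # Fetch per-bdev I/O statistics from the running bdevperf and extract the
  # transient-transport-error counter kept because of --nvme-error-stat.
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest test only passes if at least one corrupted digest surfaced
  # as a transient transport error on the wire.
  (( errcount > 0 )) && echo "transient transport errors: $errcount"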
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=838195
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 838195 /var/tmp/bperf.sock
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 838195 ']'
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:55.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:55.709 12:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:55.709 [2024-06-10 12:33:01.276150] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:55.709 [2024-06-10 12:33:01.276219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838195 ]
00:28:55.709 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.970 [2024-06-10 12:33:01.357914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.970 [2024-06-10 12:33:01.411335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:56.542 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:56.542 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:56.542 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.542 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
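Earlier in this trace, bdevperf was started with -z (idle until told to run) and waitforlisten polled its UNIX-domain RPC socket before any RPC was issued. A simplified sketch of that launch-and-wait pattern, assuming the same paths as above; the polling loop is a stand-in for the autotest waitforlisten helper, and spdk_get_version is used here only as a harmless probe RPC:

  # Start bdevperf idle (-z): 4 KiB random writes, queue depth 128, 2 s run.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Poll the RPC socket until the app answers (max_retries=100 in the trace).
  for ((i = 0; i < 100; i++)); do
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock -t 1 spdk_get_version \
          >/dev/null 2>&1 && break
      sleep 0.1
  done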
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:56.803 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:57.064 nvme0n1
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:57.064 12:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
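Condensed, the setup traced above is: keep per-status-code NVMe error statistics and disable bdev retries so every failure stays visible, attach the TCP controller with data digest (--ddgst) enabled, arm the accel crc32c fault injector to corrupt the next 256 operations, then start the queued workload. The same sequence as a plain script, using only RPCs that appear in the trace; $SPDK_DIR stands in for the workspace path, and routing rpc_cmd to the target app's default socket is an assumption read off the helper names:

  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf side
  tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                         # target app, default socket
  # Count errors per NVMe status code and fail I/O instead of retrying it.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no stale injection is active before attaching.
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  # Attach with data digest enabled: a CRC32C is carried on every data PDU.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the result of the next 256 crc32c operations.
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the randwrite workload in the idle bdevperf.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests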
00:28:57.324 Running I/O for 2 seconds...
00:28:57.324 [... repeated error records elided: tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with varying pdu values (a long run on pdu=0x2000190ed0b0), each followed by a WRITE command print and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1, varying cid and lba, at roughly 10 ms intervals from 12:33:02.696 until this excerpt cuts off at 12:33:03.812 ...]
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.813500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.813516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.824085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.825233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.825251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.835862] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.837006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.847597] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.848745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.848761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.859351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.860508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.860524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.871059] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.872207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.872223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.882800] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.883950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.883966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 
12:33:03.894532] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.895681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.895697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.906304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.907452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.907468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.918042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.919184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.919204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.929814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.930965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.930980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.941527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.942672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.942688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.953281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.954387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.954403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.371 [2024-06-10 12:33:03.965010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.371 [2024-06-10 12:33:03.966158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.371 [2024-06-10 12:33:03.966174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:28:58.632 [2024-06-10 12:33:03.976755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.632 [2024-06-10 12:33:03.977899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.632 [2024-06-10 12:33:03.977915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.632 [2024-06-10 12:33:03.988504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.632 [2024-06-10 12:33:03.989641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.632 [2024-06-10 12:33:03.989657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.632 [2024-06-10 12:33:04.000244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.632 [2024-06-10 12:33:04.001403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.632 [2024-06-10 12:33:04.001419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.632 [2024-06-10 12:33:04.011954] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.632 [2024-06-10 12:33:04.013098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.632 [2024-06-10 12:33:04.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.632 [2024-06-10 12:33:04.023685] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.632 [2024-06-10 12:33:04.024831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.632 [2024-06-10 12:33:04.024847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.035421] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.036538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.047159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.048274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.048291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.058884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.060033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.070613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.071757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.071773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.082326] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.083468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.094060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.095176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.095192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.105805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.106959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.117547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.118702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.118718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.129269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.130413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.130433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.141025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.142173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.142190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.152742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.153854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.153870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.164487] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.165632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.165647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.176207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.177351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.177366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.187934] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.189040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.189056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.199656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.200806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.200822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.211380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.212525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.212541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.223103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.224249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.224265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.633 [2024-06-10 12:33:04.234841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.633 [2024-06-10 12:33:04.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.633 [2024-06-10 12:33:04.236002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.246566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.896 [2024-06-10 12:33:04.247673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.247689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.258301] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.896 [2024-06-10 12:33:04.259448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.259464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.270019] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.896 [2024-06-10 12:33:04.271174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.271189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.281745] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.896 [2024-06-10 12:33:04.282848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.282865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.293479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee5c8 00:28:58.896 [2024-06-10 12:33:04.294627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.294643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.304637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190fb8b8 00:28:58.896 [2024-06-10 12:33:04.305757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.305773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.317511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e7c50 00:28:58.896 [2024-06-10 12:33:04.318781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.318798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.329265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e8d30 00:28:58.896 [2024-06-10 12:33:04.330571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.330586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.341048] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e9e10 00:28:58.896 [2024-06-10 12:33:04.342377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.342393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.352830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190eaef0 00:28:58.896 [2024-06-10 12:33:04.354135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.354151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.364611] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e5ec8 00:28:58.896 [2024-06-10 12:33:04.365915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.365931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.376401] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e4de8 00:28:58.896 [2024-06-10 12:33:04.377709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.377725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.388164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190df550 00:28:58.896 [2024-06-10 12:33:04.389478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.389495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.399945] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f3a28 00:28:58.896 [2024-06-10 12:33:04.401258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.401275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.411729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f4b08 00:28:58.896 [2024-06-10 12:33:04.413039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.413056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.423489] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ef270 00:28:58.896 [2024-06-10 12:33:04.424793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.424809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.435274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ee190 00:28:58.896 [2024-06-10 12:33:04.436580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.436599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.447032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190ed0b0 00:28:58.896 [2024-06-10 12:33:04.448343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.448360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.458811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190fdeb0 00:28:58.896 [2024-06-10 12:33:04.460116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.460132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.470601] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f0bc0 00:28:58.896 [2024-06-10 12:33:04.471879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.471894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.482353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f1ca0 00:28:58.896 [2024-06-10 12:33:04.483660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.483676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.896 [2024-06-10 12:33:04.494136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f2d80 00:28:58.896 [2024-06-10 12:33:04.495453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.896 [2024-06-10 12:33:04.495470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.158 [2024-06-10 12:33:04.505908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e7818 00:28:59.158 [2024-06-10 12:33:04.507218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.158 [2024-06-10 12:33:04.507234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.158 [2024-06-10 12:33:04.517699] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e88f8 00:28:59.159 [2024-06-10 12:33:04.518985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.519001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.529440] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e99d8 00:28:59.159 [2024-06-10 12:33:04.530748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.530764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.541206] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190eaab8 00:28:59.159 [2024-06-10 12:33:04.542529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.542546] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.552968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e6300 00:28:59.159 [2024-06-10 12:33:04.554300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.554316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.564787] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190e5220 00:28:59.159 [2024-06-10 12:33:04.566097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.566114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.575785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0 00:28:59.159 [2024-06-10 12:33:04.577078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.577094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.588278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0 00:28:59.159 [2024-06-10 12:33:04.589578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.589594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.600030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0 00:28:59.159 [2024-06-10 12:33:04.601328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.601346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.611883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0 00:28:59.159 [2024-06-10 12:33:04.613191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 12:33:04.613211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:59.159 [2024-06-10 12:33:04.623643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0 00:28:59.159 [2024-06-10 12:33:04.624941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.159 [2024-06-10 
12:33:04.624957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159 [2024-06-10 12:33:04.635395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0
00:28:59.159 [2024-06-10 12:33:04.636691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:59.159 [2024-06-10 12:33:04.636707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159 [2024-06-10 12:33:04.647128] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0
00:28:59.159 [2024-06-10 12:33:04.648435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:59.159 [2024-06-10 12:33:04.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159 [2024-06-10 12:33:04.658864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0
00:28:59.159 [2024-06-10 12:33:04.660165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:59.159 [2024-06-10 12:33:04.660181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159 [2024-06-10 12:33:04.670608] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0
00:28:59.159 [2024-06-10 12:33:04.671909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:59.159 [2024-06-10 12:33:04.671926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159 [2024-06-10 12:33:04.682336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c07f0) with pdu=0x2000190f92c0
00:28:59.159 [2024-06-10 12:33:04.683635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:59.159 [2024-06-10 12:33:04.683651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:59.159
00:28:59.159 Latency(us)
00:28:59.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.159 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:59.159 nvme0n1 : 2.00 21708.89 84.80 0.00 0.00 5888.48 2266.45 13871.79
00:28:59.159 ===================================================================================================================
00:28:59.159 Total : 21708.89 84.80 0.00 0.00 5888.48 2266.45 13871.79
00:28:59.159 0
00:28:59.159 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:59.159 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:59.159 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:59.159 | .driver_specific
00:28:59.159 | .nvme_error
00:28:59.159 | .status_code
00:28:59.159 | .command_transient_transport_error'
00:28:59.159 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
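
The trace lines above are digest.sh's get_transient_errcount in action: it asks the bdevperf process, over its private RPC socket, for per-bdev I/O statistics and pulls out the transient-transport-error counter that --nvme-error-stat accumulates; the (( 170 > 0 )) check that follows asserts the counter came back non-zero. Reconstructed from the trace as a standalone sketch (the real helper in digest.sh may differ in detail):

    # Count transient transport errors recorded for a bdev, as traced above.
    # Assumes bdevperf is listening on /var/tmp/bperf.sock and the controller
    # was attached after bdev_nvme_set_options --nvme-error-stat.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # Usage, mirroring digest.sh@71: fail the test if no errors were counted.
    # (( $(get_transient_errcount nvme0n1) > 0 ))
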
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 838195
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 838195 ']'
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 838195
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 838195
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 838195'
00:28:59.423 killing process with pid 838195
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 838195
00:28:59.423 Received shutdown signal, test time was about 2.000000 seconds
00:28:59.423
00:28:59.423 Latency(us)
00:28:59.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.423 ===================================================================================================================
00:28:59.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:59.423 12:33:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 838195
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=839068
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 839068 /var/tmp/bperf.sock
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 839068 ']'
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
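
The launch just traced (digest.sh@57/@60) is the suite's usual start-and-wait pattern: bdevperf is backgrounded with -z, which keeps it idle until a perform_tests RPC arrives, and waitforlisten blocks until the RPC socket answers, which is what the 'Waiting for process...' lines below show. A minimal sketch of that pattern; the polling loop is an assumption standing in for the fuller waitforlisten in autotest_common.sh:

    # Start bdevperf on a private RPC socket; -z defers I/O until perform_tests.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified stand-in for autotest_common.sh's waitforlisten: poll the
    # socket with a harmless RPC until it answers, bailing if the process died.
    waitforlisten() {
        local pid=$1 rpc_addr=$2
        while ! ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid" 2> /dev/null || return 1
            sleep 0.1
        done
    }
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
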
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:59.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:59.684 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:59.684 [2024-06-10 12:33:05.090919] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:28:59.684 [2024-06-10 12:33:05.090977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839068 ]
00:28:59.684 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:59.684 Zero copy mechanism will not be used.
00:28:59.684 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.684 [2024-06-10 12:33:05.169491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:59.684 [2024-06-10 12:33:05.222543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.627 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:00.627 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:29:00.627 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.627 12:33:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:00.627 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:00.888 nvme0n1
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:00.888 12:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
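
The RPC sequence traced above is the whole setup for this error case, gathered here into one sketch. bperf_rpc and rpc_cmd are the suite's wrappers: bperf_rpc adds -s /var/tmp/bperf.sock to reach bdevperf, while rpc_cmd talks to the nvmf target application on its default socket; the -i 32 argument controls how many crc32c operations get corrupted (per the accel_error module's semantics).

    # Configure the initiator (bdevperf) and the target for digest-error injection.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count NVMe errors; never retry
    rpc_cmd accel_error_inject_error -o crc32c -t disable                    # start from a clean slate
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # attach with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32              # now corrupt crc32c results
    bperf_py perform_tests                                                   # start the timed randwrite run

With --ddgst negotiated and crc32c results being corrupted, digest verification fails on the affected write data and each such command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). That is the stream of data_crc32_calc_done errors during the 2-second run below, and it is what get_transient_errcount counts afterwards.
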
00:29:01.149 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.149 Zero copy mechanism will not be used.
00:29:01.149 Running I/O for 2 seconds...
00:29:01.149 [2024-06-10 12:33:06.587659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.588007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.588035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.598633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.599001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.599021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.611340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.611668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.611686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.621836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.622104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.632570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.632929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.632948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.642491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10 12:33:06.642802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.149 [2024-06-10 12:33:06.642824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:01.149 [2024-06-10 12:33:06.654339] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:01.149 [2024-06-10
12:33:06.654655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.654672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.666220] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.666560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.666577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.673069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.673424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.673442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.682211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.682561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.682578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.689973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.690320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.690338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.698259] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.698592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.698610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.709283] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.709376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.709392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.717970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with 
pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.718219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.718236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.728145] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.728526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.728543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.734000] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.734209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.738946] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.739265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.739282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.743880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.744171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.744189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.149 [2024-06-10 12:33:06.749815] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.149 [2024-06-10 12:33:06.750042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.149 [2024-06-10 12:33:06.750058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.756355] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.756557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.756573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.762477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.762815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.762832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.767525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.767847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.767864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.773986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.774321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.774338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.779373] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.779573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.779589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.784166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.784480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.784497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.790463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.790662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.790678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.800262] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.800517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.800533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.809227] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.809464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.809480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.817011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.817409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.824042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.824342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.824360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.830637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.830931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.830950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.835798] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.836109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.841260] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.841541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.841558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.846751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.846955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.846971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:01.412 [2024-06-10 12:33:06.851508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.851836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.851853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.857772] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.858075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.858093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.863091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.863293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.412 [2024-06-10 12:33:06.863309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.412 [2024-06-10 12:33:06.867274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.412 [2024-06-10 12:33:06.867471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.867487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.871204] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.871399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.871416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.875755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.875951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.875967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.882950] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.883193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.883215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.889232] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.889523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.895331] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.895529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.895546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.902794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.902992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.903008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.910217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.910461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.919948] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.920247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.920265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.925731] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.926093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.926110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.932923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.933259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.933276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.940131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.940457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.940475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.947754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.947958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.947975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.954734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.955045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.955062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.961861] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.962182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.962205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.968864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.969164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.975500] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.975858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.975875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.981077] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.981280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.981296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.986546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.986858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.986876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.991755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.992038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.992055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:06.996901] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:06.997099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:06.997119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:07.003265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:07.003554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:07.003571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.413 [2024-06-10 12:33:07.009746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.413 [2024-06-10 12:33:07.009977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.413 [2024-06-10 12:33:07.009994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.018934] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.019133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.019149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.023466] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.023797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 
[2024-06-10 12:33:07.023814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.029304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.029680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.029697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.036027] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.036324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.036341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.042077] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.042298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.042315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.049639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.049974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.049991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.059132] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.059498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.059515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.069830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.070221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.070238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.079515] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.079938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.079956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.088679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.089066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.089083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.098511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.098868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.098886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.107789] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.108119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.108136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.118894] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.119269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.119286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.127476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.127765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.127783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.135725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.136102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.136119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.143330] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.143683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.143700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.150358] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.150598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.150617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.161461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.161696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.161713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.172070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.172287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.172303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.676 [2024-06-10 12:33:07.182459] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.676 [2024-06-10 12:33:07.182823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.676 [2024-06-10 12:33:07.182841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.191852] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.192209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.192226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.199495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.199892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.199909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.208979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.209181] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.209203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.215441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.215805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.215825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.222881] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.223302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.223319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.231089] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.231293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.231310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.239678] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.240065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.240084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.251170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.251385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.251402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.260458] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.260847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.260865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.677 [2024-06-10 12:33:07.270305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.677 [2024-06-10 12:33:07.270640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.677 [2024-06-10 12:33:07.270656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.280138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.280506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.280523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.285939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.286317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.286334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.291633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.291977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.291995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.299623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.299939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.299957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.306362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.306628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.306645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.311417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.311718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.311735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.317424] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 
[2024-06-10 12:33:07.317756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.317773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.322171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.322380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.322396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.327314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.327607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.327624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.333815] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.334012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.334028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.340253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.340451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.340471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.345967] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.346313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.346330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.352938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.353261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.353280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.360428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with 
pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.360707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.360725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.368473] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.368783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.938 [2024-06-10 12:33:07.368800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.938 [2024-06-10 12:33:07.374110] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.938 [2024-06-10 12:33:07.374435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.374451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.382776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.383150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.383167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.391086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.391384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.391401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.398374] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.398682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.398700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.405480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.405869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.405886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.412103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.412438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.412456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.420001] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.420343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.420361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.426422] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.426723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.426740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.432560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.432761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.432777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.437841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.438040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.438056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.445265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.445551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.445569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.451925] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.452207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.452225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.458692] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.458989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.459005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.467751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.467961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.467977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.475063] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.475289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.475305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.483315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.483634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.483651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.490819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.491124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.491141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.497312] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.497633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.497650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.503292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.503492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.503508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:01.939 [2024-06-10 12:33:07.510471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.510775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.510792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.516254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.516575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.516592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.522419] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.522684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.522705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.529519] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.529717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.529733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.939 [2024-06-10 12:33:07.536582] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:01.939 [2024-06-10 12:33:07.536871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.939 [2024-06-10 12:33:07.536888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.544638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.545014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.545032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.549815] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.550015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.550031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.555612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.555883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.555900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.562411] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.562704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.562721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.569086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.569400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.569418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.575305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.575650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.575667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.582208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.582415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.582431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.589305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.589625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.589641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.596414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.596614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.596630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.603634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.603927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.201 [2024-06-10 12:33:07.603944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.201 [2024-06-10 12:33:07.610317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.201 [2024-06-10 12:33:07.610641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.610657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.616733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.617119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.617137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.624085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.624444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.624462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.631431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.631772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.631790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.638065] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.638268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.638285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.643364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.643563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.643579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.649274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.649467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.649483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.658764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.659052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.659069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.664739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.664939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.664955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.669263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.669462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.669479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.673884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.674083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.674099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.680940] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.681139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.681155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.685746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.686065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 
[2024-06-10 12:33:07.686082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.690809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.691008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.691026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.696137] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.696479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.696496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.703773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.704136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.704154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.712385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.712694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.712712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.720630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.720904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.720921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.729734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.730089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.739179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.739573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.739590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.748288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.748699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.756911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.757315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.757333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.765803] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.766210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.766228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.775133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.775546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.775563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.784400] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.202 [2024-06-10 12:33:07.784759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.202 [2024-06-10 12:33:07.784776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.202 [2024-06-10 12:33:07.791454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.203 [2024-06-10 12:33:07.791763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.203 [2024-06-10 12:33:07.791780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.203 [2024-06-10 12:33:07.800148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.203 [2024-06-10 12:33:07.800492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.203 [2024-06-10 12:33:07.800510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.807419] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.807777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.807794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.815447] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.815945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.824212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.824515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.824532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.831342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.831679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.831696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.840490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.840807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.840824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.847154] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.847467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.852716] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.853069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.853086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.858596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.858911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.858929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.866492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.866793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.866810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.873122] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.873451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.873469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.878471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.878671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.878687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.888603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.888909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.888927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.894151] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.894357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.894376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.899654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.899904] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.899921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.907284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.907631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.914644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.914977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.464 [2024-06-10 12:33:07.914994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.464 [2024-06-10 12:33:07.923585] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.464 [2024-06-10 12:33:07.923889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.923906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.930454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:07.930883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.930900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.942400] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:07.942706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.942724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.952706] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:07.953178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.953200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.965163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 
00:29:02.465 [2024-06-10 12:33:07.965614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.965631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.976398] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:07.976751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.976768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:07.988369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:07.988801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:07.988818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.000092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.000465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.000483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.012777] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.013198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.013215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.024498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.024764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.024782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.036013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.036461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.036478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.048393] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.048768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.048786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.465 [2024-06-10 12:33:08.059997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.465 [2024-06-10 12:33:08.060254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.465 [2024-06-10 12:33:08.060272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.727 [2024-06-10 12:33:08.069296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.727 [2024-06-10 12:33:08.069502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-06-10 12:33:08.069518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.727 [2024-06-10 12:33:08.078082] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.727 [2024-06-10 12:33:08.078507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-06-10 12:33:08.078524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.088979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.089320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.089338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.096584] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.096882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.096899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.102329] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.102693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.107477] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.107818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.107836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.114072] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.114395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.114412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.120376] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.120680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.120696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.126343] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.126544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.126560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.135050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.135406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.141937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.142360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.142378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.147484] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.147686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.147702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:02.728 [2024-06-10 12:33:08.157015] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.157335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.157352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.164245] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.164499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.164515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.171678] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.172096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.172114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.178271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.178578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.178596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.187441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.187784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.187801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.194898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.195238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.195255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.203291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.203661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.203679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.209568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.209776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.214845] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.215047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.215063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.221084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.221346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.221363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.229327] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.229527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.229543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.237312] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.237644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.237662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.244943] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.245259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.245276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.250974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.251317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.251335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.255830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.256120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.256137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.263680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.264019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-06-10 12:33:08.264036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.728 [2024-06-10 12:33:08.270670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.728 [2024-06-10 12:33:08.271023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.271040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.277598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.277799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.277815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.284658] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.284938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.284955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.289735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.289934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.289950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.295543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.295845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.295862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.303114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.303444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.303461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.312109] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.312463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.321318] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.321698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.321718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.729 [2024-06-10 12:33:08.330480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.729 [2024-06-10 12:33:08.330703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-06-10 12:33:08.330720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.990 [2024-06-10 12:33:08.339506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.990 [2024-06-10 12:33:08.339846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.990 [2024-06-10 12:33:08.339863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.990 [2024-06-10 12:33:08.348849] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.990 [2024-06-10 12:33:08.349253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.990 [2024-06-10 12:33:08.349270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.990 [2024-06-10 12:33:08.359070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90 00:29:02.990 [2024-06-10 12:33:08.359448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.990 
[2024-06-10 12:33:08.359466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:02.990 [2024-06-10 12:33:08.368797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6c0b30) with pdu=0x2000190fef90
00:29:02.990 [2024-06-10 12:33:08.369008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.990 [2024-06-10 12:33:08.369025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triple repeats roughly every 10 ms through 12:33:08.580, for writes at lba 20352, 21504, 14816, 11648, 18208, 24224, 4832, 4992, 5984, 14688, 21056, 12352, 10080, 24800, 21056, 5984, 6240, 20480, 11040, 8832, 17856 and 11040 (all len:32, qid:1 cid:15, sqhd cycling 0001/0021/0041/0061) ...]
00:29:02.991
00:29:02.991 Latency(us)
00:29:02.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.991 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:02.991 nvme0n1 : 2.01 3989.33 498.67 0.00 0.00 4002.36 1843.20 12834.13
00:29:02.991 ===================================================================================================================
00:29:02.991 Total : 3989.33 498.67 0.00 0.00 4002.36 1843.20 12834.13
00:29:02.991 0
00:29:03.251 12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 258 > 0 ))
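For readers tracing the pass/fail decision above: get_transient_errcount asks bdevperf's RPC server for the bdev's iostat and digs the transient-transport-error counter out of the NVMe error stats with jq; the check succeeds because 258 such completions were recorded. A standalone sketch of the same query, with paths and names taken from this run (an illustration, not the helper itself):

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded by a
# bdevperf-attached NVMe bdev, mirroring the host/digest.sh trace above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock    # bdevperf's RPC socket in this run
bdev=nvme0n1

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# With ddgst enabled and CRC errors injected, this should be nonzero.
(( errcount > 0 )) && echo "observed $errcount transient transport errors"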
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 839068
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 839068 ']'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 839068
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 839068
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:29:03.251 12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 839068'
00:29:03.251 killing process with pid 839068
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 839068
00:29:03.251 Received shutdown signal, test time was about 2.000000 seconds
00:29:03.251
00:29:03.251 Latency(us)
00:29:03.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.251 ===================================================================================================================
00:29:03.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 839068
00:29:03.512 12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 836556
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 836556 ']'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 836556
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 836556
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 836556'
00:29:03.512 killing process with pid 836556
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 836556
12:33:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 836556
00:29:03.773
00:29:03.773 real	0m16.397s
00:29:03.773 user	0m32.262s
00:29:03.773 sys	0m3.326s
12:33:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable
12:33:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.773 ************************************
00:29:03.773 END TEST nvmf_digest_error
00:29:03.773 ************************************
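The killprocess helper that just ran twice (once for the bperf host process, once for the target) always follows the same liveness-check-then-kill shape. A simplified sketch of that shape, with the sudo-owned-process branch and error handling elided (the real helper lives in test/common/autotest_common.sh):

# Simplified sketch of the killprocess pattern traced above; not the verbatim
# helper, which also escalates via sudo when the owner process is sudo.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no pid, nothing to kill
    kill -0 "$pid"                       # liveness probe; fails if already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    fi
    echo "killing process with pid $pid"
    kill "$pid"                          # SIGTERM first, so SPDK apps shut down cleanly
    wait "$pid" || true                  # reap; bdevperf prints its final table here
}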
00:29:03.773 12:33:09 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
12:33:09 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:03.774 rmmod nvme_tcp
00:29:03.774 rmmod nvme_fabrics
00:29:03.774 rmmod nvme_keyring
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 836556 ']'
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 836556
12:33:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 836556 ']'
12:33:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 836556
00:29:03.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (836556) - No such process
12:33:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 836556 is not found'
00:29:03.774 Process with pid 836556 is not found
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
12:33:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:33:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
12:33:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:06.319 12:33:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:06.319
00:29:06.319 real	0m42.842s
00:29:06.319 user	1m6.083s
00:29:06.319 sys	0m12.733s
12:33:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable
12:33:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:06.319 ************************************
00:29:06.319 END TEST nvmf_digest
00:29:06.319 ************************************
12:33:11 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
12:33:11 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
12:33:11 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
12:33:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
12:33:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
12:33:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
12:33:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:06.319 ************************************
00:29:06.319 START TEST nvmf_bdevperf
00:29:06.319 ************************************
12:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:06.319 * Looking for test storage...
00:29:06.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:06.319 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:06.320 12:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:14.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.461 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:14.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:14.462 Found net devices under 0000:31:00.0: cvl_0_0 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:14.462 12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:29:14.462 Found net devices under 0000:31:00.1: cvl_0_1
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:14.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:14.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms
00:29:14.462
00:29:14.462 --- 10.0.0.2 ping statistics ---
00:29:14.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:14.462 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:14.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:14.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms
00:29:14.462
00:29:14.462 --- 10.0.0.1 ping statistics ---
00:29:14.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:14.462 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
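The nvmf_tcp_init sequence just traced is what lets target and initiator share one machine while still crossing a physical link: the target-side port (cvl_0_0) is moved into its own network namespace, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 leaves the host stack. Condensed to its essentials, with interface names and addresses exactly as in this run:

# Condensed sketch of the nvmf_tcp_init sequence above: isolate the target NIC
# in a namespace, address both ends, open TCP/4420, and ping both directions.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                    # sanity: initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # sanity: target -> initiator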
12:33:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
12:33:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=844909
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 844909
12:33:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 844909 ']'
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:14.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
12:33:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:14.462 [2024-06-10 12:33:19.856209] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:29:14.462 [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:14.462 EAL: No free 2048 kB hugepages reported on node 1
00:29:14.462 [2024-06-10 12:33:19.951473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:14.462 [2024-06-10 12:33:20.048939] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:14.462 [2024-06-10 12:33:20.049000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:14.462 [2024-06-10 12:33:20.049009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:14.462 [2024-06-10 12:33:20.049016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:14.462 [2024-06-10 12:33:20.049022] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:14.462 [2024-06-10 12:33:20.049168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:29:14.462 [2024-06-10 12:33:20.049331] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:29:14.462 [2024-06-10 12:33:20.049473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 ))
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
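nvmfappstart boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch of that start-and-wait shape; the loop is a simplification of waitforlisten (which, as traced, allows max_retries=100), and the paths are this run's:

# Minimal sketch of the nvmfappstart / waitforlisten pattern traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for ((i = 100; i > 0; i--)); do
    # rpc_get_methods only answers once the app is listening on /var/tmp/spdk.sock
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
(( i == 0 )) && { echo "nvmf_tgt (pid $nvmfpid) never started listening" >&2; exit 1; }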
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:15.406 [2024-06-10 12:33:20.685121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:15.406 Malloc0
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:15.406 [2024-06-10 12:33:20.751404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
12:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
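Those four rpc_cmd calls are the entire target-side setup for this test. Written directly against rpc.py they look like this (socket, sizes, and NQNs exactly as in this run; rpc_cmd is just a thin wrapper around the same invocations):

# The provisioning sequence above, spelled out: TCP transport, a 64 MiB malloc
# bdev with 512 B blocks, and subsystem cnode1 exposing it on 10.0.0.2:4420.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420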
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
12:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:15.406 {
00:29:15.406 "params": {
00:29:15.406 "name": "Nvme$subsystem",
00:29:15.406 "trtype": "$TEST_TRANSPORT",
00:29:15.406 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:15.406 "adrfam": "ipv4",
00:29:15.406 "trsvcid": "$NVMF_PORT",
00:29:15.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:15.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:15.406 "hdgst": ${hdgst:-false},
00:29:15.406 "ddgst": ${ddgst:-false}
00:29:15.406 },
00:29:15.406 "method": "bdev_nvme_attach_controller"
00:29:15.406 }
00:29:15.406 EOF
00:29:15.406 )")
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
12:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:15.406 "params": {
00:29:15.406 "name": "Nvme1",
00:29:15.406 "trtype": "tcp",
00:29:15.406 "traddr": "10.0.0.2",
00:29:15.406 "adrfam": "ipv4",
00:29:15.406 "trsvcid": "4420",
00:29:15.406 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:15.406 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:15.406 "hdgst": false,
00:29:15.406 "ddgst": false
00:29:15.406 },
00:29:15.406 "method": "bdev_nvme_attach_controller"
00:29:15.406 }'
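gen_nvmf_target_json, expanded above, is plain shell templating: one heredoc per requested subsystem, with ${hdgst:-false}/${ddgst:-false} defaulting the digest flags, validated and pretty-printed through jq, then handed to bdevperf on an anonymous fd. A trimmed sketch of the idea, with variables set to this run's values (the full helper also joins multiple entries with IFS=, before printing):

# Trimmed sketch of the heredoc templating traced above; with these values it
# renders the same bdev_nvme_attach_controller entry bdevperf got on /dev/fd/62.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF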
[2024-06-10 12:33:20.803470] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
[ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845071 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-10 12:33:20.858519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-10 12:33:20.912605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:29:16.608
00:29:16.608 Latency(us)
00:29:16.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.608 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.608 Verification LBA range: start 0x0 length 0x4000
00:29:16.608 Nvme1n1 : 1.01 9046.99 35.34 0.00 0.00 14099.73 2717.01 12997.97
00:29:16.608 ===================================================================================================================
00:29:16.608 Total : 9046.99 35.34 0.00 0.00 14099.73 2717.01 12997.97
12:33:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=845282
12:33:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
12:33:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
12:33:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:16.608 {
00:29:16.608 "params": {
00:29:16.608 "name": "Nvme$subsystem",
00:29:16.608 "trtype": "$TEST_TRANSPORT",
00:29:16.608 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:16.608 "adrfam": "ipv4",
00:29:16.608 "trsvcid": "$NVMF_PORT",
00:29:16.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:16.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:16.608 "hdgst": ${hdgst:-false},
00:29:16.608 "ddgst": ${ddgst:-false}
00:29:16.608 },
00:29:16.608 "method": "bdev_nvme_attach_controller"
00:29:16.608 }
00:29:16.608 EOF
00:29:16.608 )")
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
12:33:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:16.869 "params": {
00:29:16.869 "name": "Nvme1",
00:29:16.869 "trtype": "tcp",
00:29:16.869 "traddr": "10.0.0.2",
00:29:16.869 "adrfam": "ipv4",
00:29:16.869 "trsvcid": "4420",
00:29:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:16.869 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:16.869 "hdgst": false,
00:29:16.869 "ddgst": false
00:29:16.869 },
00:29:16.869 "method": "bdev_nvme_attach_controller"
00:29:16.869 }'
[2024-06-10 12:33:22.250078] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
[ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845282 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-10 12:33:22.315374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-10 12:33:22.379864] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
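This second bdevperf run is the failover half of the test: fifteen seconds of verify I/O with the target hard-killed out from under it three seconds in. A condensed sketch of what the harness does next; gen_target_json stands in for the gen_nvmf_target_json expansion shown above, $SPDK and $nvmfpid come from the earlier sketches, and the pids are this run's:

# Condensed sketch of the kill-the-target exercise that follows. The harness
# starts bdevperf with -f and relies on it surviving the target going away;
# every in-flight command then completes with ABORTED - SQ DELETION, as the
# log below shows.
"$SPDK/build/examples/bdevperf" --json <(gen_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!      # 845282 in this run

sleep 3             # let the verify workload ramp up
kill -9 "$nvmfpid"  # nvmf_tgt, pid 844909 here: no clean shutdown, no TCP FIN
sleep 3             # the abort storm below is the expected fallout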
00:29:19.672 12:33:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 844909
12:33:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:19.672 [2024-06-10 12:33:25.215291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.672 [2024-06-10 12:33:25.215331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... with the target gone, every queued command drains the same way: the READ / ABORTED - SQ DELETION pair above repeats for lba 112928 through 113392 in steps of 8 (all len:8, qid:1, cids varying, completions all cid:0 sqhd:0000), timestamps 12:33:25.215351 through 12:33:25.216497 ...]
00:29:19.673 [2024-06-10 12:33:25.216506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 
12:33:25.216675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.673 [2024-06-10 12:33:25.216699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.673 [2024-06-10 12:33:25.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.673 [2024-06-10 12:33:25.216800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.216988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113776 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:19.674 [2024-06-10 12:33:25.217362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.674 [2024-06-10 12:33:25.217405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.674 [2024-06-10 12:33:25.217412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.675 [2024-06-10 12:33:25.217529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.675 [2024-06-10 12:33:25.217630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819f60 is same with the state(5) to be set 00:29:19.675 [2024-06-10 12:33:25.217648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:19.675 [2024-06-10 12:33:25.217654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:19.675 [2024-06-10 12:33:25.217661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113544 len:8 PRP1 0x0 PRP2 0x0 00:29:19.675 [2024-06-10 12:33:25.217669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.675 [2024-06-10 12:33:25.217707] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1819f60 was disconnected and freed. reset controller. 
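Every completion in the dump above carries the status pair (00/08), which SPDK renders as ABORTED - SQ DELETION: the first field is the NVMe Status Code Type (00h, the generic command status set) and the second is the Status Code (08h, Command Aborted due to SQ Deletion, per the NVMe base specification). In other words, the queued reads and writes were drained with that status when I/O submission queue 1 was deleted, just before the qpair was disconnected and freed. A minimal standalone sketch of that decode, assuming only the two fields printed in the log (this is illustrative, not SPDK's own print helper):

#include <stdio.h>

/* Sketch: map the (SCT/SC) pair printed in each completion line above to a
 * human-readable string. Only the codes seen in this log are covered; the
 * values come from the NVMe base spec (Generic Command Status, SCT 0h). */
static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION"; /* Command Aborted due to SQ Deletion */
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    return "UNKNOWN";                   /* anything else is out of scope here */
}

int main(void)
{
    /* (00/08) exactly as printed in the completion lines above */
    printf("%s\n", nvme_status_str(0x00, 0x08));
    return 0;
}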
00:29:19.675 [2024-06-10 12:33:25.221241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.675 [2024-06-10 12:33:25.221287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.675 [2024-06-10 12:33:25.222100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.675 [2024-06-10 12:33:25.222116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.675 [2024-06-10 12:33:25.222124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.675 [2024-06-10 12:33:25.222351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.675 [2024-06-10 12:33:25.222573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.675 [2024-06-10 12:33:25.222583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.675 [2024-06-10 12:33:25.222592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.675 [2024-06-10 12:33:25.226145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.675 [2024-06-10 12:33:25.235367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.675 [2024-06-10 12:33:25.236036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.675 [2024-06-10 12:33:25.236074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.675 [2024-06-10 12:33:25.236085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.675 [2024-06-10 12:33:25.236336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.675 [2024-06-10 12:33:25.236561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.675 [2024-06-10 12:33:25.236571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.675 [2024-06-10 12:33:25.236578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.675 [2024-06-10 12:33:25.240136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.675 [2024-06-10 12:33:25.249352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.675 [2024-06-10 12:33:25.250018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.675 [2024-06-10 12:33:25.250056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.675 [2024-06-10 12:33:25.250067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.675 [2024-06-10 12:33:25.250317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.675 [2024-06-10 12:33:25.250541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.675 [2024-06-10 12:33:25.250550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.675 [2024-06-10 12:33:25.250558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.675 [2024-06-10 12:33:25.254111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.675 [2024-06-10 12:33:25.263342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.675 [2024-06-10 12:33:25.264006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.675 [2024-06-10 12:33:25.264045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.675 [2024-06-10 12:33:25.264056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.675 [2024-06-10 12:33:25.264306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.675 [2024-06-10 12:33:25.264531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.675 [2024-06-10 12:33:25.264540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.675 [2024-06-10 12:33:25.264547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.675 [2024-06-10 12:33:25.268107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.937 [2024-06-10 12:33:25.277345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.278011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.278049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.278060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.278308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.278532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.278542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.278550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.282112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.937 [2024-06-10 12:33:25.291345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.292037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.292075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.292085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.292333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.292558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.292567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.292574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.296128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.937 [2024-06-10 12:33:25.305140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.305689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.305708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.305716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.305941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.306162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.306171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.306178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.309736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.937 [2024-06-10 12:33:25.318949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.319516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.319532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.319539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.319759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.319979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.319988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.319995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.323548] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.937 [2024-06-10 12:33:25.332764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.333332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.333349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.333356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.333576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.333796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.333805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.333812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.337362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.937 [2024-06-10 12:33:25.346570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.347223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.347260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.347273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.347515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.347739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.347750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.347761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.351330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.937 [2024-06-10 12:33:25.360382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.937 [2024-06-10 12:33:25.361049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.937 [2024-06-10 12:33:25.361087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.937 [2024-06-10 12:33:25.361097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.937 [2024-06-10 12:33:25.361346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.937 [2024-06-10 12:33:25.361571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.937 [2024-06-10 12:33:25.361581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.937 [2024-06-10 12:33:25.361588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.937 [2024-06-10 12:33:25.365142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.937 [2024-06-10 12:33:25.374354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.374917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.374953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.374964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.375214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.375439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.375448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.375456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.379012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.938 [2024-06-10 12:33:25.388230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.388860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.388897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.388908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.389147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.389382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.389392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.389400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.392954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.938 [2024-06-10 12:33:25.402170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.402808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.402846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.402857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.403096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.403330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.403341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.403348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.406904] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.938 [2024-06-10 12:33:25.416116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.416807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.416845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.416855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.417094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.417329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.417339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.417346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.420901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.938 [2024-06-10 12:33:25.430115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.430792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.430829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.430840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.431078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.431312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.431323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.431330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.434887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.938 [2024-06-10 12:33:25.444103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.444743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.444781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.444792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.445035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.938 [2024-06-10 12:33:25.445270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.938 [2024-06-10 12:33:25.445280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.938 [2024-06-10 12:33:25.445287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.938 [2024-06-10 12:33:25.448842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.938 [2024-06-10 12:33:25.458072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.938 [2024-06-10 12:33:25.458648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.938 [2024-06-10 12:33:25.458667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.938 [2024-06-10 12:33:25.458675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.938 [2024-06-10 12:33:25.458895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.459115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.459123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.459130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.462688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.939 [2024-06-10 12:33:25.471896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.939 [2024-06-10 12:33:25.472473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.939 [2024-06-10 12:33:25.472489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.939 [2024-06-10 12:33:25.472497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.939 [2024-06-10 12:33:25.472716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.472936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.472946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.472953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.476513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.939 [2024-06-10 12:33:25.485739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.939 [2024-06-10 12:33:25.486416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.939 [2024-06-10 12:33:25.486454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.939 [2024-06-10 12:33:25.486464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.939 [2024-06-10 12:33:25.486703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.486927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.486937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.486948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.490517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.939 [2024-06-10 12:33:25.499537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.939 [2024-06-10 12:33:25.500161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.939 [2024-06-10 12:33:25.500206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.939 [2024-06-10 12:33:25.500217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.939 [2024-06-10 12:33:25.500456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.500680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.500689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.500697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.504260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.939 [2024-06-10 12:33:25.513480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.939 [2024-06-10 12:33:25.514175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.939 [2024-06-10 12:33:25.514220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.939 [2024-06-10 12:33:25.514233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.939 [2024-06-10 12:33:25.514473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.514697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.514706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.514713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.518278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.939 [2024-06-10 12:33:25.527302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.939 [2024-06-10 12:33:25.527945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.939 [2024-06-10 12:33:25.527982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:19.939 [2024-06-10 12:33:25.527993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:19.939 [2024-06-10 12:33:25.528241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:19.939 [2024-06-10 12:33:25.528466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.939 [2024-06-10 12:33:25.528475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.939 [2024-06-10 12:33:25.528482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.939 [2024-06-10 12:33:25.532043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.201 [2024-06-10 12:33:25.541285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.541831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.541854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.541863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.542083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.542311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.542320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.542327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.545884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.201 [2024-06-10 12:33:25.555121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.555640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.555677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.555688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.555927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.556151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.556160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.556169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.559729] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.201 [2024-06-10 12:33:25.568979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.569666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.569704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.569715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.569955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.570178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.570188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.570204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.573758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.201 [2024-06-10 12:33:25.582972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.583532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.583570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.583581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.583820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.584049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.584058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.584066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.587634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.201 [2024-06-10 12:33:25.596949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.597612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.597650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.597660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.597899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.598123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.598133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.598140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.601706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.201 [2024-06-10 12:33:25.610934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.611591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.611628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.611639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.611878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.612102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.612112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.612119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.615688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.201 [2024-06-10 12:33:25.624913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.625579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.625616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.625627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.625866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.626090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.626100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.626107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.629677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.201 [2024-06-10 12:33:25.638895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.639571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.639608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.639619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.639858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.640082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.640092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.640099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.201 [2024-06-10 12:33:25.643664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.201 [2024-06-10 12:33:25.652895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.201 [2024-06-10 12:33:25.653563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.201 [2024-06-10 12:33:25.653601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.201 [2024-06-10 12:33:25.653612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.201 [2024-06-10 12:33:25.653851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.201 [2024-06-10 12:33:25.654075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.201 [2024-06-10 12:33:25.654085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.201 [2024-06-10 12:33:25.654092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.657669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.202 [2024-06-10 12:33:25.666901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.667569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.667607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.667618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.667857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.668081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.668091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.668098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.671667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.202 [2024-06-10 12:33:25.680902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.681576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.681615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.681630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.681869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.682093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.682102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.682110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.685672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.202 [2024-06-10 12:33:25.694890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.695541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.695579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.695590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.695829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.696053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.696063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.696071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.699635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.202 [2024-06-10 12:33:25.708858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.709374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.709394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.709401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.709622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.709842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.709851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.709858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.713420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.202 [2024-06-10 12:33:25.722853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.723431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.723448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.723455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.723674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.723895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.723909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.723916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.727475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.202 [2024-06-10 12:33:25.736704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.737282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.737306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.737314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.737533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.737753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.737762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.737769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.741327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.202 [2024-06-10 12:33:25.750583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.751060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.751076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.751084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.751309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.751529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.751538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.751545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.755104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.202 [2024-06-10 12:33:25.764557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.765134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.765150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.765157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.765381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.765602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.765611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.765618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.769171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.202 [2024-06-10 12:33:25.778440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.779117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.779155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.779166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.779412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.779637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.779647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.779655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.783219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.202 [2024-06-10 12:33:25.792451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.202 [2024-06-10 12:33:25.793033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.202 [2024-06-10 12:33:25.793052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.202 [2024-06-10 12:33:25.793060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.202 [2024-06-10 12:33:25.793286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.202 [2024-06-10 12:33:25.793507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.202 [2024-06-10 12:33:25.793518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.202 [2024-06-10 12:33:25.793525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.202 [2024-06-10 12:33:25.797078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.464 [2024-06-10 12:33:25.806315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.806887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.806903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.806911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.807130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.807355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.807365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.807372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.810925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.464 [2024-06-10 12:33:25.820154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.820736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.820752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.820759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.820983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.821210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.821219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.821226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.824777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.464 [2024-06-10 12:33:25.834006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.834548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.834564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.834572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.834791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.835011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.835020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.835027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.838588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.464 [2024-06-10 12:33:25.847814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.848354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.848371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.848379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.848599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.848819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.848828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.848835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.852437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.464 [2024-06-10 12:33:25.861682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.862312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.862350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.862362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.862604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.862828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.862837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.862850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.866416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.464 [2024-06-10 12:33:25.875650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.876224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.876244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.876251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.876472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.876691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.464 [2024-06-10 12:33:25.876701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.464 [2024-06-10 12:33:25.876709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.464 [2024-06-10 12:33:25.880269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.464 [2024-06-10 12:33:25.889497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.464 [2024-06-10 12:33:25.890033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.464 [2024-06-10 12:33:25.890049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.464 [2024-06-10 12:33:25.890057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.464 [2024-06-10 12:33:25.890283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.464 [2024-06-10 12:33:25.890503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.890512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.890520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.894072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.465 [2024-06-10 12:33:25.903303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.903813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.903829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.903836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.904055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.904282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.904293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.904300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.907855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.465 [2024-06-10 12:33:25.917297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.917882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.917898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.917905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.918124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.918352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.918364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.918370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.921923] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.465 [2024-06-10 12:33:25.931159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.931832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.931870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.931881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.932120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.932351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.932361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.932369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.935927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.465 [2024-06-10 12:33:25.945154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.945803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.945841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.945853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.946094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.946324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.946335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.946342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.949898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.465 [2024-06-10 12:33:25.959135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.959726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.959745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.959753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.959973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.960203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.960212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.960219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.963769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.465 [2024-06-10 12:33:25.972987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.973562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.973579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.973586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.973805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.974025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.974034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.974041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.977596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.465 [2024-06-10 12:33:25.986847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:25.987312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:25.987350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:25.987362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:25.987605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:25.987828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:25.987838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:25.987846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:25.991413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.465 [2024-06-10 12:33:26.000841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:26.001606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:26.001644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:26.001656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:26.001897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:26.002120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:26.002130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:26.002137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:26.005704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.465 [2024-06-10 12:33:26.014720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:26.015321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:26.015360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:26.015372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:26.015614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:26.015838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:26.015848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:26.015855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:26.019420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.465 [2024-06-10 12:33:26.028637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.465 [2024-06-10 12:33:26.029323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.465 [2024-06-10 12:33:26.029362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.465 [2024-06-10 12:33:26.029374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.465 [2024-06-10 12:33:26.029616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.465 [2024-06-10 12:33:26.029840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.465 [2024-06-10 12:33:26.029850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.465 [2024-06-10 12:33:26.029857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.465 [2024-06-10 12:33:26.033421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.466 [2024-06-10 12:33:26.042641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.466 [2024-06-10 12:33:26.043317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.466 [2024-06-10 12:33:26.043355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.466 [2024-06-10 12:33:26.043367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.466 [2024-06-10 12:33:26.043607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.466 [2024-06-10 12:33:26.043831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.466 [2024-06-10 12:33:26.043841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.466 [2024-06-10 12:33:26.043848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.466 [2024-06-10 12:33:26.047414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.466 [2024-06-10 12:33:26.056643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.466 [2024-06-10 12:33:26.057308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.466 [2024-06-10 12:33:26.057351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.466 [2024-06-10 12:33:26.057363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.466 [2024-06-10 12:33:26.057603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.466 [2024-06-10 12:33:26.057827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.466 [2024-06-10 12:33:26.057837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.466 [2024-06-10 12:33:26.057844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.466 [2024-06-10 12:33:26.061407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.728 [2024-06-10 12:33:26.070629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.728 [2024-06-10 12:33:26.071214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.728 [2024-06-10 12:33:26.071233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.728 [2024-06-10 12:33:26.071241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.728 [2024-06-10 12:33:26.071462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.728 [2024-06-10 12:33:26.071682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.728 [2024-06-10 12:33:26.071691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.728 [2024-06-10 12:33:26.071698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.728 [2024-06-10 12:33:26.075252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.728 [2024-06-10 12:33:26.084470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.728 [2024-06-10 12:33:26.084993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.728 [2024-06-10 12:33:26.085009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.728 [2024-06-10 12:33:26.085016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.728 [2024-06-10 12:33:26.085240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.728 [2024-06-10 12:33:26.085461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.728 [2024-06-10 12:33:26.085470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.728 [2024-06-10 12:33:26.085477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.728 [2024-06-10 12:33:26.089024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.728 [2024-06-10 12:33:26.098447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.728 [2024-06-10 12:33:26.098987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.728 [2024-06-10 12:33:26.099002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:20.728 [2024-06-10 12:33:26.099009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:20.728 [2024-06-10 12:33:26.099233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:20.728 [2024-06-10 12:33:26.099458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.728 [2024-06-10 12:33:26.099467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.728 [2024-06-10 12:33:26.099474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.728 [2024-06-10 12:33:26.103022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.728 [2024-06-10 12:33:26.112236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.728 [2024-06-10 12:33:26.112808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.728 [2024-06-10 12:33:26.112823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.728 [2024-06-10 12:33:26.112830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.728 [2024-06-10 12:33:26.113049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.728 [2024-06-10 12:33:26.113275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.728 [2024-06-10 12:33:26.113284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.728 [2024-06-10 12:33:26.113291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.728 [2024-06-10 12:33:26.116841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.728 [2024-06-10 12:33:26.126056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.728 [2024-06-10 12:33:26.126591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.728 [2024-06-10 12:33:26.126606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.728 [2024-06-10 12:33:26.126614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.728 [2024-06-10 12:33:26.126833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.728 [2024-06-10 12:33:26.127054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.728 [2024-06-10 12:33:26.127063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.728 [2024-06-10 12:33:26.127070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.728 [2024-06-10 12:33:26.130623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.728 [2024-06-10 12:33:26.140045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.728 [2024-06-10 12:33:26.140585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.728 [2024-06-10 12:33:26.140601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.728 [2024-06-10 12:33:26.140608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.728 [2024-06-10 12:33:26.140827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.728 [2024-06-10 12:33:26.141048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.728 [2024-06-10 12:33:26.141057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.728 [2024-06-10 12:33:26.141064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.728 [2024-06-10 12:33:26.144616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.154043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.154602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.154618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.154625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.154844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.155065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.155073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.155080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.158631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.167840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.168385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.168401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.168408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.168627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.168848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.168856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.168863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.172417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.181841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.182505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.182543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.182554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.182793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.183017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.183027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.183035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.186630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.195651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.196217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.196237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.196249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.196470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.196690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.196698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.196705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.200260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.209481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.210148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.210186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.210206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.210446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.210671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.210682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.210689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.214249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.223470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.224124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.224163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.224176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.224424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.224650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.224660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.224667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.228225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.237444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.238020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.238038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.238046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.238272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.238493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.238507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.238514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.242066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.251369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.252056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.252094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.252107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.252357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.252582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.252592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.252599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.256170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.265186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.265885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.265922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.265934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.266175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.266406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.266417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.266424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.269982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.278991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.279671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.729 [2024-06-10 12:33:26.279709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.729 [2024-06-10 12:33:26.279720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.729 [2024-06-10 12:33:26.279959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.729 [2024-06-10 12:33:26.280183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.729 [2024-06-10 12:33:26.280192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.729 [2024-06-10 12:33:26.280208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.729 [2024-06-10 12:33:26.283764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.729 [2024-06-10 12:33:26.292991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.729 [2024-06-10 12:33:26.293676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.730 [2024-06-10 12:33:26.293715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.730 [2024-06-10 12:33:26.293725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.730 [2024-06-10 12:33:26.293964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.730 [2024-06-10 12:33:26.294188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.730 [2024-06-10 12:33:26.294205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.730 [2024-06-10 12:33:26.294214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.730 [2024-06-10 12:33:26.297771] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.730 [2024-06-10 12:33:26.306995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.730 [2024-06-10 12:33:26.307686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.730 [2024-06-10 12:33:26.307724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.730 [2024-06-10 12:33:26.307735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.730 [2024-06-10 12:33:26.307974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.730 [2024-06-10 12:33:26.308205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.730 [2024-06-10 12:33:26.308215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.730 [2024-06-10 12:33:26.308223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.730 [2024-06-10 12:33:26.311780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.730 [2024-06-10 12:33:26.320789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.730 [2024-06-10 12:33:26.321378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.730 [2024-06-10 12:33:26.321398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.730 [2024-06-10 12:33:26.321406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.730 [2024-06-10 12:33:26.321627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.730 [2024-06-10 12:33:26.321847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.730 [2024-06-10 12:33:26.321855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.730 [2024-06-10 12:33:26.321862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.730 [2024-06-10 12:33:26.325413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.334630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.335183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.335205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.335213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.335438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.335658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.335667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.335674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.339225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.348445] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.348982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.348998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.349006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.349231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.349451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.349461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.349468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.353017] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.362276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.362924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.362962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.362972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.363221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.363446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.363456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.363463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.367020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.376239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.376829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.376848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.376856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.377076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.377303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.377313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.377324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.380876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.390092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.390747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.390785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.390796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.391034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.391268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.391278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.391286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.394874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.403891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.404535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.404573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.404584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.404823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.405047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.405057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.405064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.408631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.417844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.418534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.418571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.418582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.418821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.419044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.419054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.419062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.422627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.431844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.432505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.432547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.432558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.432797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.433021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.433030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.433038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.436602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.445819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.446476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.992 [2024-06-10 12:33:26.446514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.992 [2024-06-10 12:33:26.446525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.992 [2024-06-10 12:33:26.446764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.992 [2024-06-10 12:33:26.446988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.992 [2024-06-10 12:33:26.446997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.992 [2024-06-10 12:33:26.447005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.992 [2024-06-10 12:33:26.450569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.992 [2024-06-10 12:33:26.459798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.992 [2024-06-10 12:33:26.460479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.460516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.460527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.460765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.460989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.460998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.461006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.464572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.473790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.474362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.474400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.474412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.474653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.474881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.474891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.474898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.478463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.487684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.488310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.488347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.488359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.488600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.488823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.488832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.488840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.492405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.501628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.502310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.502348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.502359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.502597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.502821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.502830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.502838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.506402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.515619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.516282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.516320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.516330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.516570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.516794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.516804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.516811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.520381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.529601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.530157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.530176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.530184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.530410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.530632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.530640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.530647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.534197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.543406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.544074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.544111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.544122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.544370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.544594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.544604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.544612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.548168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.557436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.558004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.558040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.558051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.558303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.558529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.558538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.558546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.562103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.571321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.571992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.572029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.572044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.572293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.572518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.572527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.572535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.576088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.993 [2024-06-10 12:33:26.585308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.993 [2024-06-10 12:33:26.585857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.993 [2024-06-10 12:33:26.585876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:20.993 [2024-06-10 12:33:26.585884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:20.993 [2024-06-10 12:33:26.586104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:20.993 [2024-06-10 12:33:26.586333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.993 [2024-06-10 12:33:26.586344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.993 [2024-06-10 12:33:26.586351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.993 [2024-06-10 12:33:26.589900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.255 [2024-06-10 12:33:26.599112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.255 [2024-06-10 12:33:26.599695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.255 [2024-06-10 12:33:26.599711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.255 [2024-06-10 12:33:26.599718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.255 [2024-06-10 12:33:26.599937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.255 [2024-06-10 12:33:26.600156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.255 [2024-06-10 12:33:26.600166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.255 [2024-06-10 12:33:26.600173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.255 [2024-06-10 12:33:26.603758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.255 [2024-06-10 12:33:26.612974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.255 [2024-06-10 12:33:26.613651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.255 [2024-06-10 12:33:26.613688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.255 [2024-06-10 12:33:26.613699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.255 [2024-06-10 12:33:26.613938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.255 [2024-06-10 12:33:26.614162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.255 [2024-06-10 12:33:26.614181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.255 [2024-06-10 12:33:26.614189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.255 [2024-06-10 12:33:26.617754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.255 [2024-06-10 12:33:26.626975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.255 [2024-06-10 12:33:26.627576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.255 [2024-06-10 12:33:26.627613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.255 [2024-06-10 12:33:26.627624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.255 [2024-06-10 12:33:26.627864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.255 [2024-06-10 12:33:26.628088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.255 [2024-06-10 12:33:26.628097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.255 [2024-06-10 12:33:26.628104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.255 [2024-06-10 12:33:26.631667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.255 [2024-06-10 12:33:26.640886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.255 [2024-06-10 12:33:26.641562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.255 [2024-06-10 12:33:26.641600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.255 [2024-06-10 12:33:26.641611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.255 [2024-06-10 12:33:26.641851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.255 [2024-06-10 12:33:26.642075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.255 [2024-06-10 12:33:26.642085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.255 [2024-06-10 12:33:26.642092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.255 [2024-06-10 12:33:26.645656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.255 [2024-06-10 12:33:26.654884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.255 [2024-06-10 12:33:26.655560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.255 [2024-06-10 12:33:26.655598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.255 [2024-06-10 12:33:26.655609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.255 [2024-06-10 12:33:26.655847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.255 [2024-06-10 12:33:26.656071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.255 [2024-06-10 12:33:26.656080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.255 [2024-06-10 12:33:26.656088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.256 [2024-06-10 12:33:26.659652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.256 [2024-06-10 12:33:26.668876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.256 [2024-06-10 12:33:26.669514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.256 [2024-06-10 12:33:26.669552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.256 [2024-06-10 12:33:26.669562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.256 [2024-06-10 12:33:26.669801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.256 [2024-06-10 12:33:26.670025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.256 [2024-06-10 12:33:26.670034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.256 [2024-06-10 12:33:26.670042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.256 [2024-06-10 12:33:26.673606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.256 [2024-06-10 12:33:26.682832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.256 [2024-06-10 12:33:26.683374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.256 [2024-06-10 12:33:26.683393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.256 [2024-06-10 12:33:26.683401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.256 [2024-06-10 12:33:26.683621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.256 [2024-06-10 12:33:26.683841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.256 [2024-06-10 12:33:26.683850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.256 [2024-06-10 12:33:26.683857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.256 [2024-06-10 12:33:26.687409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.256 [2024-06-10 12:33:26.696829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.256 [2024-06-10 12:33:26.697437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.256 [2024-06-10 12:33:26.697475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.256 [2024-06-10 12:33:26.697486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.256 [2024-06-10 12:33:26.697725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.256 [2024-06-10 12:33:26.697949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.256 [2024-06-10 12:33:26.697958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.256 [2024-06-10 12:33:26.697966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.256 [2024-06-10 12:33:26.701529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.256 [2024-06-10 12:33:26.710752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.256 [2024-06-10 12:33:26.711327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.256 [2024-06-10 12:33:26.711364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:21.256 [2024-06-10 12:33:26.711381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:21.256 [2024-06-10 12:33:26.711621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:21.256 [2024-06-10 12:33:26.711845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.256 [2024-06-10 12:33:26.711854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.256 [2024-06-10 12:33:26.711862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.256 [2024-06-10 12:33:26.715426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.256 [2024-06-10 12:33:26.724645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.725320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.725357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.725368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.725607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.725831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.725840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.725848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.729411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.256 [2024-06-10 12:33:26.738630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.739275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.739313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.739324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.739563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.739786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.739796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.739804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.743367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.256 [2024-06-10 12:33:26.752595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.753275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.753314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.753326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.753566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.753790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.753803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.753811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.757387] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.256 [2024-06-10 12:33:26.766399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.766981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.767000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.767008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.767234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.767455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.767464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.767471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.771019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.256 [2024-06-10 12:33:26.780229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.780809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.780825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.780832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.781051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.781278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.781287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.781294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.784840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.256 [2024-06-10 12:33:26.794048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.794627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.256 [2024-06-10 12:33:26.794642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.256 [2024-06-10 12:33:26.794649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.256 [2024-06-10 12:33:26.794868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.256 [2024-06-10 12:33:26.795088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.256 [2024-06-10 12:33:26.795096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.256 [2024-06-10 12:33:26.795103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.256 [2024-06-10 12:33:26.798657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.256 [2024-06-10 12:33:26.807863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.256 [2024-06-10 12:33:26.808379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.257 [2024-06-10 12:33:26.808395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.257 [2024-06-10 12:33:26.808403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.257 [2024-06-10 12:33:26.808622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.257 [2024-06-10 12:33:26.808842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.257 [2024-06-10 12:33:26.808850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.257 [2024-06-10 12:33:26.808857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.257 [2024-06-10 12:33:26.812441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.257 [2024-06-10 12:33:26.821655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.257 [2024-06-10 12:33:26.822217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.257 [2024-06-10 12:33:26.822254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.257 [2024-06-10 12:33:26.822265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.257 [2024-06-10 12:33:26.822503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.257 [2024-06-10 12:33:26.822727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.257 [2024-06-10 12:33:26.822736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.257 [2024-06-10 12:33:26.822744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.257 [2024-06-10 12:33:26.826309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.257 [2024-06-10 12:33:26.835531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.257 [2024-06-10 12:33:26.836222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.257 [2024-06-10 12:33:26.836260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.257 [2024-06-10 12:33:26.836272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.257 [2024-06-10 12:33:26.836512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.257 [2024-06-10 12:33:26.836736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.257 [2024-06-10 12:33:26.836746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.257 [2024-06-10 12:33:26.836753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.257 [2024-06-10 12:33:26.840314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.257 [2024-06-10 12:33:26.849534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.257 [2024-06-10 12:33:26.850201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.257 [2024-06-10 12:33:26.850239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.257 [2024-06-10 12:33:26.850251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.257 [2024-06-10 12:33:26.850495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.257 [2024-06-10 12:33:26.850719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.257 [2024-06-10 12:33:26.850729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.257 [2024-06-10 12:33:26.850736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.257 [2024-06-10 12:33:26.854304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.521 [2024-06-10 12:33:26.863528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.521 [2024-06-10 12:33:26.864213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.521 [2024-06-10 12:33:26.864251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.521 [2024-06-10 12:33:26.864263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.521 [2024-06-10 12:33:26.864503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.521 [2024-06-10 12:33:26.864727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.521 [2024-06-10 12:33:26.864737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.864745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.868309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.522 [2024-06-10 12:33:26.877526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.878221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.878259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.878271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.878511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.878735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.878744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.878752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.882318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.522 [2024-06-10 12:33:26.891335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.891992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.892029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.892040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.892288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.892513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.892522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.892535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.896090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.522 [2024-06-10 12:33:26.905306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.905895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.905914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.905922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.906143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.906369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.906380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.906387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.909935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.522 [2024-06-10 12:33:26.919147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.919729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.919744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.919752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.919970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.920191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.920206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.920213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.923760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.522 [2024-06-10 12:33:26.932966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.933553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.933568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.933576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.933795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.934015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.934024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.934031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.937585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.522 [2024-06-10 12:33:26.946794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.947427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.947469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.947480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.947719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.947943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.947953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.947960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.951525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.522 [2024-06-10 12:33:26.960753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.961322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.961359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.961372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.961613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.961837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.961847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.961854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.965421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.522 [2024-06-10 12:33:26.974640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.975296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.975334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.975346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.975586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.975810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.975820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.975827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.979394] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.522 [2024-06-10 12:33:26.988615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:26.989147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:26.989166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:26.989174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:26.989399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:26.989625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:26.989634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:26.989641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.522 [2024-06-10 12:33:26.993203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.522 [2024-06-10 12:33:27.002621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.522 [2024-06-10 12:33:27.003293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.522 [2024-06-10 12:33:27.003331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.522 [2024-06-10 12:33:27.003341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.522 [2024-06-10 12:33:27.003581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.522 [2024-06-10 12:33:27.003804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.522 [2024-06-10 12:33:27.003814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.522 [2024-06-10 12:33:27.003821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.007383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.523 [2024-06-10 12:33:27.016605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.017188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.017212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.017220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.017440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.017660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.017669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.017676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.021259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.523 [2024-06-10 12:33:27.030471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.031027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.031065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.031077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.031326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.031550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.031561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.031568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.035129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.523 [2024-06-10 12:33:27.044349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.045016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.045054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.045064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.045312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.045537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.045546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.045554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.049109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.523 [2024-06-10 12:33:27.058338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.059019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.059057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.059067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.059316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.059540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.059550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.059557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.063113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.523 [2024-06-10 12:33:27.072333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.073012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.073050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.073061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.073309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.073534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.073543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.073551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.077106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.523 [2024-06-10 12:33:27.086332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.086890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.086910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.086922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.087142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.087370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.087379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.087386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.090934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.523 [2024-06-10 12:33:27.100153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.100699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.100715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.100723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.100942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.101162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.101171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.101178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.104730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.523 [2024-06-10 12:33:27.114148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.523 [2024-06-10 12:33:27.114730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-06-10 12:33:27.114745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.523 [2024-06-10 12:33:27.114753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.523 [2024-06-10 12:33:27.114971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.523 [2024-06-10 12:33:27.115191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.523 [2024-06-10 12:33:27.115207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.523 [2024-06-10 12:33:27.115214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.523 [2024-06-10 12:33:27.118761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.838 [2024-06-10 12:33:27.127975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.128563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.128579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.128586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.128805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.129025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.129039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.129046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.132638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.838 [2024-06-10 12:33:27.141855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.142519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.142557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.142568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.142807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.143032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.143042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.143049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.146612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.838 [2024-06-10 12:33:27.155842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.156536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.156574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.156585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.156824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.157047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.157057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.157065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.160631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.838 [2024-06-10 12:33:27.169639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.170281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.170319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.170331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.170570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.170794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.170803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.170811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.174376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.838 [2024-06-10 12:33:27.183596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.184182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.184205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.184214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.184434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.184654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.184663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.184670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.188223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.838 [2024-06-10 12:33:27.197444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.198013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.198029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.198037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.198263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.198484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.198492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.838 [2024-06-10 12:33:27.198499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.838 [2024-06-10 12:33:27.202047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.838 [2024-06-10 12:33:27.211260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.838 [2024-06-10 12:33:27.211817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.838 [2024-06-10 12:33:27.211833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.838 [2024-06-10 12:33:27.211840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.838 [2024-06-10 12:33:27.212060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.838 [2024-06-10 12:33:27.212286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.838 [2024-06-10 12:33:27.212295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.212302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.215848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.839 [2024-06-10 12:33:27.225059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.225640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.225655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.225663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.225886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.226105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.226114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.226122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.229705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.839 [2024-06-10 12:33:27.238921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.239573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.239611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.239621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.239860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.240084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.240093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.240101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.243665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.839 [2024-06-10 12:33:27.252883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.253560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.253598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.253608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.253847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.254071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.254080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.254088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.257664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.839 [2024-06-10 12:33:27.266885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.267549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.267586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.267597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.267835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.268059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.268069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.268080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.271646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.839 [2024-06-10 12:33:27.280940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.281651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.281689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.281701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.281941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.282165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.282174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.282182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.285746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.839 [2024-06-10 12:33:27.294753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.839 [2024-06-10 12:33:27.295457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.839 [2024-06-10 12:33:27.295495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:21.839 [2024-06-10 12:33:27.295505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:21.839 [2024-06-10 12:33:27.295744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:21.839 [2024-06-10 12:33:27.295968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.839 [2024-06-10 12:33:27.295978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.839 [2024-06-10 12:33:27.295985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.839 [2024-06-10 12:33:27.299551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the nine-message reconnect cycle above (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 -> recv state unchanged -> failed to flush tqpair (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state -> Resetting controller failed) repeats for 50 further attempts, one roughly every 14 ms, with entry timestamps [2024-06-10 12:33:27.308571] through [2024-06-10 12:33:27.994160] and console timestamps 00:29:21.839 through 00:29:22.632; only the timestamps differ between attempts, and every attempt ends with bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. ...]
00:29:22.632 [2024-06-10 12:33:28.003376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.004022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.004062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.004072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.004319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.004544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.004554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.004561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.008116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.632 [2024-06-10 12:33:28.017339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.018015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.018052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.018063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.018310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.018543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.018553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.018561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.022117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.632 [2024-06-10 12:33:28.031341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.032024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.032062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.032072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.032319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.032544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.032553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.032561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.036115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.632 [2024-06-10 12:33:28.045341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.045922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.045941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.045949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.046168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.046395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.046405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.046412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.049961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.632 [2024-06-10 12:33:28.059213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.059785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.059802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.059809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.060028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.060253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.060262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.060269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.063821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.632 [2024-06-10 12:33:28.073040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.073706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.073744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.073755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.073993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.074225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.074236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.074244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.077799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.632 [2024-06-10 12:33:28.087019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.087661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.087699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.632 [2024-06-10 12:33:28.087709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.632 [2024-06-10 12:33:28.087948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.632 [2024-06-10 12:33:28.088172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.632 [2024-06-10 12:33:28.088181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.632 [2024-06-10 12:33:28.088190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.632 [2024-06-10 12:33:28.091752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.632 [2024-06-10 12:33:28.100968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.632 [2024-06-10 12:33:28.101527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.632 [2024-06-10 12:33:28.101546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.101555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.101775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.101995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.102005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.102012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.105566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.633 [2024-06-10 12:33:28.114774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.115415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.115453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.115468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.115708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.115931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.115941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.115948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.119514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.633 [2024-06-10 12:33:28.128729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.129431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.129469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.129480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.129719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.129943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.129953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.129960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.133526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.633 [2024-06-10 12:33:28.142533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.143218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.143256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.143268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.143508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.143732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.143742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.143750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.147310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.633 [2024-06-10 12:33:28.156535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.157129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.157167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.157178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.157427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.157654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.157667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.157675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.161239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.633 [2024-06-10 12:33:28.170455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.171138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.171175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.171187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.171437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.171662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.171671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.171679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.175233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.633 [2024-06-10 12:33:28.184448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.633 [2024-06-10 12:33:28.185092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.633 [2024-06-10 12:33:28.185130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.633 [2024-06-10 12:33:28.185141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.633 [2024-06-10 12:33:28.185390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.633 [2024-06-10 12:33:28.185614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.633 [2024-06-10 12:33:28.185623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.633 [2024-06-10 12:33:28.185631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.633 [2024-06-10 12:33:28.189183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.633 [2024-06-10 12:33:28.198398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.633 [2024-06-10 12:33:28.199087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.633 [2024-06-10 12:33:28.199124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.633 [2024-06-10 12:33:28.199135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.633 [2024-06-10 12:33:28.199385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.633 [2024-06-10 12:33:28.199610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.633 [2024-06-10 12:33:28.199621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.633 [2024-06-10 12:33:28.199628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.633 [2024-06-10 12:33:28.203181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 844909 Killed "${NVMF_APP[@]}" "$@"
00:29:22.633 [2024-06-10 12:33:28.212188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:22.633 [2024-06-10 12:33:28.212877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.633 [2024-06-10 12:33:28.212916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.633 [2024-06-10 12:33:28.212926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:22.633 [2024-06-10 12:33:28.213165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.633 [2024-06-10 12:33:28.213399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.633 [2024-06-10 12:33:28.213409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.633 [2024-06-10 12:33:28.213417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.633 [2024-06-10 12:33:28.216975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.633 12:33:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=846612
12:33:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 846612
12:33:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 846612 ']'
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
12:33:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-06-10 12:33:28.225996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-06-10 12:33:28.226555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-10 12:33:28.226591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
[2024-06-10 12:33:28.226603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
[2024-06-10 12:33:28.226843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
[2024-06-10 12:33:28.227069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-06-10 12:33:28.227079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-06-10 12:33:28.227087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-06-10 12:33:28.230662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
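At this point the harness has launched a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace (the previous instance was killed at bdevperf.sh line 35 above), recorded its PID as nvmfpid=846612, and waitforlisten polls, up to max_retries=100, for the new process to accept connections on the RPC socket /var/tmp/spdk.sock. A rough standalone C equivalent of that polling idea, not the script's actual implementation; the socket path and retry count come from the trace, while the 100 ms probe interval is an assumption:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        const char *rpc_addr = "/var/tmp/spdk.sock";   /* rpc_addr from the script trace */
        int max_retries = 100;                         /* max_retries from the script trace */

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            strncpy(sa.sun_path, rpc_addr, sizeof(sa.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                printf("listening after %d attempt(s)\n", i + 1);
                close(fd);
                return 0;
            }
            close(fd);
            usleep(100 * 1000);                        /* assumed probe interval */
        }
        fprintf(stderr, "gave up waiting on %s\n", rpc_addr);
        return 1;
    }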
00:29:22.897 [2024-06-10 12:33:28.239894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.897 [2024-06-10 12:33:28.240456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.897 [2024-06-10 12:33:28.240475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.897 [2024-06-10 12:33:28.240483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.897 [2024-06-10 12:33:28.240703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.897 [2024-06-10 12:33:28.240923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.897 [2024-06-10 12:33:28.240933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.897 [2024-06-10 12:33:28.240940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.897 [2024-06-10 12:33:28.244503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.897 [2024-06-10 12:33:28.253728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.897 [2024-06-10 12:33:28.254310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.897 [2024-06-10 12:33:28.254326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.897 [2024-06-10 12:33:28.254334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.897 [2024-06-10 12:33:28.254553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.897 [2024-06-10 12:33:28.254774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.898 [2024-06-10 12:33:28.254783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.898 [2024-06-10 12:33:28.254791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.898 [2024-06-10 12:33:28.258357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.898 [2024-06-10 12:33:28.267613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.268187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.268210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.268218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.268439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.268659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.268668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.268675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.271850] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization...
00:29:22.898 [2024-06-10 12:33:28.271895] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:22.898 [2024-06-10 12:33:28.272229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.898 [2024-06-10 12:33:28.281448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.282017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.282037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.282045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.282271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.282492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.282501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.282510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.286061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
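The two "Starting SPDK" / "DPDK EAL parameters" records in the middle of the reset noise mark the restarted target coming up: the app hands the logged parameter string to DPDK's Environment Abstraction Layer before any SPDK subsystem starts. A hedged sketch of that hand-off using DPDK's public rte_eal_init(); the argv mirrors the logged parameters (abridged: two --log-level entries and --match-allocations are dropped purely for brevity), and building it requires DPDK development headers and libraries:

    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        /* argv mirrors the "[ DPDK EAL parameters: ... ]" record above (abridged). */
        char *eal_argv[] = {
            "nvmf",                          /* program name slot, as in the log */
            "-c", "0xE",                     /* same core mask the target was given */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        puts("EAL initialized");
        rte_eal_cleanup();
        return 0;
    }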
00:29:22.898 [2024-06-10 12:33:28.295288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.295960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.295998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.296009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.296256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.296481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.296490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.296497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.300052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.898 EAL: No free 2048 kB hugepages reported on node 1
00:29:22.898 [2024-06-10 12:33:28.309347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.310045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.310083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.310095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.310342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.310567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.310577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.310584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.314139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.898 [2024-06-10 12:33:28.323151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.898 [2024-06-10 12:33:28.323712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.898 [2024-06-10 12:33:28.323732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.898 [2024-06-10 12:33:28.323740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.898 [2024-06-10 12:33:28.323962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.898 [2024-06-10 12:33:28.324187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.898 [2024-06-10 12:33:28.324202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.898 [2024-06-10 12:33:28.324211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.898 [2024-06-10 12:33:28.327763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.898 [2024-06-10 12:33:28.336987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.898 [2024-06-10 12:33:28.337565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.898 [2024-06-10 12:33:28.337581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.898 [2024-06-10 12:33:28.337589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.898 [2024-06-10 12:33:28.337808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.898 [2024-06-10 12:33:28.338028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.898 [2024-06-10 12:33:28.338036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.898 [2024-06-10 12:33:28.338044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.898 [2024-06-10 12:33:28.341601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.898 [2024-06-10 12:33:28.350816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.351354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.351370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.351378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.351597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.351818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.351827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.351834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.355396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.898 [2024-06-10 12:33:28.358600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:22.898 [2024-06-10 12:33:28.364614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.898 [2024-06-10 12:33:28.365199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.898 [2024-06-10 12:33:28.365216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.898 [2024-06-10 12:33:28.365224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.898 [2024-06-10 12:33:28.365443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.898 [2024-06-10 12:33:28.365663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.898 [2024-06-10 12:33:28.365672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.898 [2024-06-10 12:33:28.365682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.898 [2024-06-10 12:33:28.369236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
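"Total cores available: 3" follows directly from the -m 0xE mask passed to nvmf_tgt above: the mask is a bitmap of CPU ids, and 0xE = 0b1110 selects cores 1, 2 and 3 while leaving core 0 free, which also matches the three reactors that start a few records below. A tiny standalone decoder for such masks (the 64-bit mask width is an assumption):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long long mask = strtoull("0xE", NULL, 16);   /* the -m argument */
        int total = 0;

        printf("cores:");
        for (int cpu = 0; cpu < 64; cpu++) {
            if ((mask >> cpu) & 1ULL) {        /* bit N set => core N is used */
                printf(" %d", cpu);
                total++;
            }
        }
        printf("\nTotal cores available: %d\n", total);        /* prints 3 for 0xE */
        return 0;
    }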
00:29:22.898 [2024-06-10 12:33:28.378453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.898 [2024-06-10 12:33:28.378859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.898 [2024-06-10 12:33:28.378876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.898 [2024-06-10 12:33:28.378884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.898 [2024-06-10 12:33:28.379103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.898 [2024-06-10 12:33:28.379329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.898 [2024-06-10 12:33:28.379339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.898 [2024-06-10 12:33:28.379346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.898 [2024-06-10 12:33:28.382892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.899 [2024-06-10 12:33:28.392328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.899 [2024-06-10 12:33:28.392983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.899 [2024-06-10 12:33:28.393023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.899 [2024-06-10 12:33:28.393034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.899 [2024-06-10 12:33:28.393283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.899 [2024-06-10 12:33:28.393507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.899 [2024-06-10 12:33:28.393517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.899 [2024-06-10 12:33:28.393525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.899 [2024-06-10 12:33:28.397082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.899 [2024-06-10 12:33:28.406301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.899 [2024-06-10 12:33:28.406971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.899 [2024-06-10 12:33:28.407010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.899 [2024-06-10 12:33:28.407021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.899 [2024-06-10 12:33:28.407271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.899 [2024-06-10 12:33:28.407496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.899 [2024-06-10 12:33:28.407506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.899 [2024-06-10 12:33:28.407513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.899 [2024-06-10 12:33:28.411068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:22.899 [2024-06-10 12:33:28.411972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:22.899 [2024-06-10 12:33:28.411997] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:22.899 [2024-06-10 12:33:28.412006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:22.899 [2024-06-10 12:33:28.412010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:22.899 [2024-06-10 12:33:28.412014] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:22.899 [2024-06-10 12:33:28.412111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:29:22.899 [2024-06-10 12:33:28.412271] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:29:22.899 [2024-06-10 12:33:28.412412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.899 [2024-06-10 12:33:28.420297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.899 [2024-06-10 12:33:28.421020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.899 [2024-06-10 12:33:28.421059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.899 [2024-06-10 12:33:28.421070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.899 [2024-06-10 12:33:28.421319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.899 [2024-06-10 12:33:28.421545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.899 [2024-06-10 12:33:28.421554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.899 [2024-06-10 12:33:28.421562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.899 [2024-06-10 12:33:28.425116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.899 [2024-06-10 12:33:28.434123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.899 [2024-06-10 12:33:28.434846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.899 [2024-06-10 12:33:28.434886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.899 [2024-06-10 12:33:28.434896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.899 [2024-06-10 12:33:28.435138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.899 [2024-06-10 12:33:28.435370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.899 [2024-06-10 12:33:28.435381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.899 [2024-06-10 12:33:28.435388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.899 [2024-06-10 12:33:28.438943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.899 [2024-06-10 12:33:28.447953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.899 [2024-06-10 12:33:28.448537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.899 [2024-06-10 12:33:28.448556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:22.899 [2024-06-10 12:33:28.448564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:22.899 [2024-06-10 12:33:28.448784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:22.899 [2024-06-10 12:33:28.449004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.899 [2024-06-10 12:33:28.449014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.899 [2024-06-10 12:33:28.449027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.899 [2024-06-10 12:33:28.452580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
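A few records back, app_setup_trace advertised two ways to grab the tracepoint data enabled by -e 0xFFFF: run 'spdk_trace -s nvmf -i 0' against the live target, or copy /dev/shm/nvmf_trace.0 somewhere safe for offline analysis before the target exits. A bare-bones sketch of the copy option; the source file name comes straight from the log, while the snapshot destination name is arbitrary:

    #include <stdio.h>

    int main(void)
    {
        FILE *in = fopen("/dev/shm/nvmf_trace.0", "rb");    /* trace file named in the log */
        FILE *out = fopen("nvmf_trace.0.snapshot", "wb");   /* destination is an assumption */
        if (in == NULL || out == NULL) {
            perror("fopen");
            return 1;
        }

        char buf[1 << 16];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)    /* plain block copy */
            fwrite(buf, 1, n, out);

        fclose(in);
        fclose(out);
        return 0;
    }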
00:29:22.899 [2024-06-10 12:33:28.461799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:22.899 [2024-06-10 12:33:28.462382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.899 [2024-06-10 12:33:28.462421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420
00:29:22.899 [2024-06-10 12:33:28.462433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set
00:29:22.899 [2024-06-10 12:33:28.462675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor
00:29:22.899 [2024-06-10 12:33:28.462899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:22.899 [2024-06-10 12:33:28.462908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:22.899 [2024-06-10 12:33:28.462916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:22.899 [2024-06-10 12:33:28.466479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log trimmed: the same resetting controller -> connect() failed, errno = 111 -> controller reinitialization failed -> Resetting controller failed. cycle for nqn.2016-06.io.spdk:cnode1 (tqpair=0x15e8130, addr=10.0.0.2, port=4420) repeats 41 more times at roughly 14 ms intervals, from 12:33:28.475749 through 12:33:29.036519, with no other events in between]
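errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 yet, so every reset attempt by the bdev_nvme layer dies at connect() and the controller stays in failed state until the target's listener comes up. A minimal bash sketch of the same readiness check (a hypothetical helper, not part of the test suite; it uses bash's built-in /dev/tcp redirection, and the address/port are taken from the log above):

  #!/usr/bin/env bash
  # Poll a TCP endpoint until something accepts the connection.
  # A refused connect() here is the same ECONNREFUSED (errno 111)
  # that posix_sock_create keeps reporting above.
  addr=10.0.0.2; port=4420
  until timeout 1 bash -c ">/dev/tcp/$addr/$port" 2>/dev/null; do
      echo "connect to $addr:$port failed (no listener yet); retrying"
      sleep 0.1
  done
  echo "listener is accepting on $addr:$port"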
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[log trimmed: two more reconnect/ECONNREFUSED cycles (12:33:29.045938 and 12:33:29.059742), each ending in "Resetting controller failed."]
[log trimmed: one more reconnect/ECONNREFUSED cycle (12:33:29.073564), ending in "Resetting controller failed."]
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[log trimmed: one more reconnect/ECONNREFUSED cycle begins at 12:33:29.087399; mid-cycle the target-side transport comes up:]
00:29:23.689 [2024-06-10 12:33:29.090101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:23.689 [2024-06-10 12:33:29.092035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
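The rpc_cmd wrapper used by host/bdevperf.sh forwards its arguments to SPDK's JSON-RPC client against the already-running target. Outside the harness the same step could be issued directly; the sketch below assumes the stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket, neither of which is shown in this log, and copies the flags verbatim from the trace above:

  # Create the TCP transport on a running nvmf target (sketch).
  # -t tcp selects the transport type; -o and -u 8192 are passed
  # exactly as bdevperf.sh@17 passed them in this run.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The *** TCP Transport Init *** notice above is the target acknowledging this call; the host's reset loop keeps failing because only the transport exists so far, not a listener.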
00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 [2024-06-10 12:33:29.101291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:23.689 [2024-06-10 12:33:29.101925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.689 [2024-06-10 12:33:29.101963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:23.689 [2024-06-10 12:33:29.101974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:23.689 [2024-06-10 12:33:29.102221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:23.689 [2024-06-10 12:33:29.102445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:23.689 [2024-06-10 12:33:29.102455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:23.689 [2024-06-10 12:33:29.102463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:23.689 [2024-06-10 12:33:29.106016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:23.689 [2024-06-10 12:33:29.115281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:23.689 [2024-06-10 12:33:29.115983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.689 [2024-06-10 12:33:29.116022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:23.689 [2024-06-10 12:33:29.116033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:23.689 [2024-06-10 12:33:29.116279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:23.689 [2024-06-10 12:33:29.116505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:23.689 [2024-06-10 12:33:29.116514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:23.689 [2024-06-10 12:33:29.116522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:23.689 [2024-06-10 12:33:29.120080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
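
The bdev_malloc_create call above creates the RAM-backed block device the target will export; the positional arguments are presumably the size in MiB and the block size in bytes (per rpc.py conventions), so Malloc0 is a 64 MiB bdev with 512-byte blocks. Roughly:

    # a 64 MiB RAM-disk bdev with 512-byte blocks, named Malloc0 (sketch)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
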
00:29:23.689 Malloc0 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.689 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 [2024-06-10 12:33:29.129097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:23.689 [2024-06-10 12:33:29.129711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.689 [2024-06-10 12:33:29.129731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:23.689 [2024-06-10 12:33:29.129739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:23.689 [2024-06-10 12:33:29.129959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:23.689 [2024-06-10 12:33:29.130180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:23.689 [2024-06-10 12:33:29.130188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:23.689 [2024-06-10 12:33:29.130201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:23.689 [2024-06-10 12:33:29.133759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.690 [2024-06-10 12:33:29.142976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:23.690 [2024-06-10 12:33:29.143670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.690 [2024-06-10 12:33:29.143708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e8130 with addr=10.0.0.2, port=4420 00:29:23.690 [2024-06-10 12:33:29.143721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e8130 is same with the state(5) to be set 00:29:23.690 [2024-06-10 12:33:29.143962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e8130 (9): Bad file descriptor 00:29:23.690 [2024-06-10 12:33:29.144186] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:23.690 [2024-06-10 12:33:29.144206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:23.690 [2024-06-10 12:33:29.144214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:23.690 [2024-06-10 12:33:29.147769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
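
nvmf_create_subsystem then creates the subsystem itself (-a allows any host NQN to connect, -s sets its serial number) and nvmf_subsystem_add_ns attaches Malloc0 as its namespace; the listener on 10.0.0.2:4420 is added just below. Pulled out of the harness, this part of the bring-up is roughly the following rpc.py sequence (a sketch; the script path and RPC socket are assumptions):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the latency summary further below: 8358.23 IOPS at the 4096-byte I/O size is 8358.23 x 4096 / 2^20, about 32.65 MiB/s, which matches the MiB/s column of the table.
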
00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.690 [2024-06-10 12:33:29.155589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.690 [2024-06-10 12:33:29.156795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.690 12:33:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 845282 00:29:23.690 [2024-06-10 12:33:29.236177] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:33.712 00:29:33.712 Latency(us) 00:29:33.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.712 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:33.712 Verification LBA range: start 0x0 length 0x4000 00:29:33.712 Nvme1n1 : 15.04 8358.23 32.65 9764.80 0.00 7022.06 774.83 42598.40 00:29:33.712 =================================================================================================================== 00:29:33.712 Total : 8358.23 32.65 9764.80 0.00 7022.06 774.83 42598.40 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:33.712 rmmod nvme_tcp 00:29:33.712 rmmod nvme_fabrics 00:29:33.712 rmmod nvme_keyring 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 846612 ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 846612 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 846612 ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 846612 00:29:33.712 12:33:37 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 846612 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 846612' 00:29:33.712 killing process with pid 846612 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 846612 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 846612 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.712 12:33:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.656 12:33:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:34.656 00:29:34.656 real 0m28.672s 00:29:34.656 user 1m2.697s 00:29:34.656 sys 0m7.810s 00:29:34.656 12:33:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:34.656 12:33:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.656 ************************************ 00:29:34.656 END TEST nvmf_bdevperf 00:29:34.656 ************************************ 00:29:34.656 12:33:40 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:34.656 12:33:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:34.656 12:33:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:34.656 12:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.656 ************************************ 00:29:34.656 START TEST nvmf_target_disconnect 00:29:34.656 ************************************ 00:29:34.656 12:33:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:34.656 * Looking for test storage... 
00:29:34.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.656 12:33:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.656 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:34.918 12:33:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:43.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:43.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.063 12:33:48 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:43.063 Found net devices under 0000:31:00.0: cvl_0_0 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:43.063 Found net devices under 0000:31:00.1: cvl_0_1 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:43.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:29:43.063 00:29:43.063 --- 10.0.0.2 ping statistics --- 00:29:43.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.063 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:29:43.063 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:29:43.063 00:29:43.063 --- 10.0.0.1 ping statistics --- 00:29:43.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.064 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.064 ************************************ 00:29:43.064 START TEST nvmf_target_disconnect_tc1 00:29:43.064 ************************************ 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:29:43.064 
12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:43.064 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.064 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.064 [2024-06-10 12:33:48.664094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.064 [2024-06-10 12:33:48.664144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6280 with addr=10.0.0.2, port=4420 00:29:43.064 [2024-06-10 12:33:48.664173] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:43.064 [2024-06-10 12:33:48.664184] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:43.064 [2024-06-10 12:33:48.664191] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:43.064 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:43.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:43.325 Initializing NVMe Controllers 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:43.325 00:29:43.325 real 0m0.114s 00:29:43.325 user 0m0.040s 00:29:43.325 sys 0m0.074s 
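
tc1 deliberately runs the reconnect example before any target is listening on 10.0.0.2:4420, so spdk_nvme_probe() is expected to fail, and the NOT wrapper inverts the exit status so the test passes exactly when the connect fails. Judging from the es=0 / es=1 / (( es > 128 )) / (( !es == 0 )) trace above, the wrapper behaves roughly like this (a sketch of the harness helper, not its verbatim source):

    # Succeed only if the wrapped command fails with an ordinary error;
    # exit codes above 128 would mean death by signal and are propagated.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"
        (( !es == 0 ))   # status 0 (pass) only when es != 0
    }
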
00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:43.325 ************************************ 00:29:43.325 END TEST nvmf_target_disconnect_tc1 00:29:43.325 ************************************ 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.325 ************************************ 00:29:43.325 START TEST nvmf_target_disconnect_tc2 00:29:43.325 ************************************ 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=853059 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 853059 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 853059 ']' 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:43.325 12:33:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.325 [2024-06-10 12:33:48.814517] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:29:43.325 [2024-06-10 12:33:48.814570] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.325 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.325 [2024-06-10 12:33:48.906556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.586 [2024-06-10 12:33:49.002212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.586 [2024-06-10 12:33:49.002269] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.586 [2024-06-10 12:33:49.002278] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.586 [2024-06-10 12:33:49.002285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.586 [2024-06-10 12:33:49.002291] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.587 [2024-06-10 12:33:49.002489] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:29:43.587 [2024-06-10 12:33:49.002651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:29:43.587 [2024-06-10 12:33:49.002800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.587 [2024-06-10 12:33:49.002801] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 Malloc0 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 [2024-06-10 12:33:49.685415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 [2024-06-10 12:33:49.725761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=853359 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:44.158 12:33:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.418 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.336 12:33:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 853059 00:29:46.336 12:33:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 
00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Write completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.336 starting I/O failed 00:29:46.336 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 [2024-06-10 12:33:51.766279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 
starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Write completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 Read completed with error (sct=0, sc=8) 00:29:46.337 starting I/O failed 00:29:46.337 [2024-06-10 12:33:51.766576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.337 [2024-06-10 12:33:51.767015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.337 [2024-06-10 12:33:51.767036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.337 qpair failed and we were unable to recover it. 00:29:46.337 [2024-06-10 12:33:51.767388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.337 [2024-06-10 12:33:51.767407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.337 qpair failed and we were unable to recover it. 00:29:46.337 [2024-06-10 12:33:51.767693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.337 [2024-06-10 12:33:51.767706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.337 qpair failed and we were unable to recover it. 00:29:46.337 [2024-06-10 12:33:51.768053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.337 [2024-06-10 12:33:51.768064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.337 qpair failed and we were unable to recover it. 
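
For tc2 the target is started for real and then SIGKILLed mid-run (the kill -9 at target_disconnect.sh@45 above), so the reconnect example sees its outstanding I/O fail: each burst of "completed with error ... starting I/O failed" lines is the set of commands in flight on one qpair when it died, consistent with the -q 32 queue depth. A reading of the invocation flags, per SPDK example-app conventions (the option table itself is not in this log, so treat this as an assumption):

    #   -q 32     queue depth per qpair
    #   -o 4096   I/O size in bytes
    #   -w randrw random mixed read/write workload
    #   -M 50     read percentage of the mix (50% reads, 50% writes)
    #   -t 10     run time in seconds
    #   -c 0xF    core mask 0b1111, i.e. cores 0-3
    #   -r ...    transport ID of the target (NVMe/TCP at 10.0.0.2:4420)
    /path/to/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 \
        -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Note that the target side was started with -m 0xF0 (reactors on cores 4-7, as logged earlier), so the host example on cores 0-3 and the target do not share cores.
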
[... ~200 further near-identical reconnect attempts omitted (12:33:51.768 - 12:33:51.836), each a repeat of the same three records: posix.c:1037:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:29:46.343 [2024-06-10 12:33:51.836132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.343 [2024-06-10 12:33:51.836143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.343 qpair failed and we were unable to recover it.
00:29:46.343 [2024-06-10 12:33:51.836464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.836475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.836811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.836823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.837165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.837176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.837540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.837551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.837876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.837886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.838234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.838247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.838573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.838584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.838907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.838917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.839260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.839271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.839604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.839624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 
00:29:46.343 [2024-06-10 12:33:51.839928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.839939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.840282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.840292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.840611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.840623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.840945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.840955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.841273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.841295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.841621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.841632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.841944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.841956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.842302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.842313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.842648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.842658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.843013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.843024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 
00:29:46.343 [2024-06-10 12:33:51.843351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.843362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.843710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.843721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.844113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.844123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.844442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.844453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.844766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.844776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.845097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.845108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.845302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.845314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.845646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.845657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.845980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.845992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.846340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.846351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 
00:29:46.343 [2024-06-10 12:33:51.846652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.846662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.846982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.846992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.847291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.847303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.847623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.847633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.847930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.847941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.848262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.848272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.848595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.343 [2024-06-10 12:33:51.848606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.343 qpair failed and we were unable to recover it. 00:29:46.343 [2024-06-10 12:33:51.848792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.848804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.849151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.849161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.849483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.849494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 
00:29:46.344 [2024-06-10 12:33:51.849832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.849842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.850185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.850198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.850407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.850417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.850735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.850745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.851041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.851052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.851375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.851386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.851702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.851713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.852001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.852012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.852324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.852336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.852683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.852693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 
00:29:46.344 [2024-06-10 12:33:51.853013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.853024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.853354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.853364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.853677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.853689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.854036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.854047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.854374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.854385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.854721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.854731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.855079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.855090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.855434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.855445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.855766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.855777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.856125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.856135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 
00:29:46.344 [2024-06-10 12:33:51.856332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.856343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.856557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.856568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.856902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.856912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.857228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.857239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.857601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.857612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.857920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.857931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.858253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.858263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.858600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.858612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.858955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.858965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.859296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.859308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 
00:29:46.344 [2024-06-10 12:33:51.859540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.859551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.859874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.859886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.860205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.860217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.860535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.860545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.860898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.860908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.344 [2024-06-10 12:33:51.861247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.344 [2024-06-10 12:33:51.861258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.344 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.861578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.861588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.861933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.861944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.862289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.862299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.862614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.862625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 
00:29:46.345 [2024-06-10 12:33:51.862963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.862973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.863307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.863318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.863706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.863716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.864056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.864067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.864283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.864295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.864592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.864602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.864946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.864956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.865283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.865294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.865606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.865616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.865975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.865986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 
00:29:46.345 [2024-06-10 12:33:51.866313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.866324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.866640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.866650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.866997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.867009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.867335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.867346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.867692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.867702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.868081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.868092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.868410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.868420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.868734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.868744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.869094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.869104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.869446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.869456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 
00:29:46.345 [2024-06-10 12:33:51.869769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.869784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.870126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.870137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.870460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.870471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.870790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.870800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.871141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.871152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.871474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.871485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.871805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.871817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.872166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.872177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.872505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.872516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.872830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.872841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 
00:29:46.345 [2024-06-10 12:33:51.873133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.873144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.873330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.873342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.873662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.873674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.873900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.873912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.345 qpair failed and we were unable to recover it. 00:29:46.345 [2024-06-10 12:33:51.874257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.345 [2024-06-10 12:33:51.874268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.874560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.874570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.874913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.874923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.875151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.875161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.875471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.875482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.875822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.875832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 
00:29:46.346 [2024-06-10 12:33:51.876203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.876215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.876516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.876526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.876869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.876879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.877200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.877211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.877484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.877494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.877847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.877858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.878176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.878188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.878376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.878389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.878599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.878609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 00:29:46.346 [2024-06-10 12:33:51.878929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.346 [2024-06-10 12:33:51.878939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.346 qpair failed and we were unable to recover it. 
00:29:46.346 [2024-06-10 12:33:51.879169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.346 [2024-06-10 12:33:51.879179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.346 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" triplet repeats ~200 times between 12:33:51.879 and 12:33:51.948, differing only in timestamps; every attempt reports connect() failed, errno = 111 for tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 ...]
00:29:46.632 [2024-06-10 12:33:51.947790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.632 [2024-06-10 12:33:51.947801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.632 qpair failed and we were unable to recover it.
00:29:46.632 [2024-06-10 12:33:51.948104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.948115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.948442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.948453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.948765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.948775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.949095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.949106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.949426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.949439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.949756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.949767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.632 qpair failed and we were unable to recover it. 00:29:46.632 [2024-06-10 12:33:51.949980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.632 [2024-06-10 12:33:51.949991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.950315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.950327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.950676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.950686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.950992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.951003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 
00:29:46.633 [2024-06-10 12:33:51.951181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.951192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.951507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.951518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.951858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.951869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.952172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.952183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.952497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.952509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.952828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.952839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.953190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.953203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.953439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.953450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.953763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.953774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.954088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.954099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 
00:29:46.633 [2024-06-10 12:33:51.954409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.954420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.954732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.954743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.955060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.955072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.955414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.955425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.955767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.955777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.956098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.956108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.956446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.956457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.956778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.956788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.957144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.957155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.957475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.957486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 
00:29:46.633 [2024-06-10 12:33:51.957840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.957850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.958164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.958176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.958522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.958533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.958851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.958862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.959172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.959183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.959507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.959518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.959867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.959879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.960202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.960213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.960559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.960569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.960866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.960878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 
00:29:46.633 [2024-06-10 12:33:51.961245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.961255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.633 [2024-06-10 12:33:51.961572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.633 [2024-06-10 12:33:51.961583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.633 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.961867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.961878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.962199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.962210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.962542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.962552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.962887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.962898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.963218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.963228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.963573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.963584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.963926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.963936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.964255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.964266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 
00:29:46.634 [2024-06-10 12:33:51.964596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.964606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.964928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.964939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.965282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.965293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.965612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.965623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.965946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.965956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.966173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.966182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.966377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.966390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.966727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.966737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.967058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.967069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.967415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.967426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 
00:29:46.634 [2024-06-10 12:33:51.967782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.967792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.968153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.968163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.968485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.968497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.968818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.968828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.969022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.969033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.969267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.969278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.969602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.969612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.969824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.969834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.970222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.970233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.970572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.970582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 
00:29:46.634 [2024-06-10 12:33:51.970908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.970918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.971237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.971248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.971556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.971567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.971886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.971896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.972217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.972229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.634 [2024-06-10 12:33:51.972546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.634 [2024-06-10 12:33:51.972556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.634 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.972906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.972916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.973249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.973259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.973578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.973588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.973981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.973991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 
00:29:46.635 [2024-06-10 12:33:51.974301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.974312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.974608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.974618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.974841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.974850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.975164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.975175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.975516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.975527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.975837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.975849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.976160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.976171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.976482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.976494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.976833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.976844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.977165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.977176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 
00:29:46.635 [2024-06-10 12:33:51.977496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.977507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.977828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.977840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.978200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.978211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.978552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.978565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.978889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.978901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.979227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.979237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.979468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.979478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.979811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.979822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.980143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.980154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.980466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.980478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 
00:29:46.635 [2024-06-10 12:33:51.980785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.980796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.981110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.981120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.981441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.981452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.981778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.981789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.982138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.982148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.982472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.982483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.982676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.982687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.635 [2024-06-10 12:33:51.983012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.635 [2024-06-10 12:33:51.983023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.635 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.983363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.983375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.983740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.983752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 
00:29:46.636 [2024-06-10 12:33:51.984071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.984082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.984379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.984389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.984706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.984717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.985037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.985048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.985368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.985378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.985699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.985711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.986056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.986067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.986456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.986467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.986756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.986766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.987092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.987102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 
00:29:46.636 [2024-06-10 12:33:51.987377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.987389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.987710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.987720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.988079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.988090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.988503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.988514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.988741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.988751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.636 [2024-06-10 12:33:51.989031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.636 [2024-06-10 12:33:51.989042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.636 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.989251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.989263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.989576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.989586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.989929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.989940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.990125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.990137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 
00:29:46.637 [2024-06-10 12:33:51.990458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.990470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.990789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.990800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.991158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.991169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.991564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.991575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.991903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.991914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.992236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.992248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.992561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.992572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.992889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.992901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.993216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.993227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 12:33:51.993563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.637 [2024-06-10 12:33:51.993574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.637 qpair failed and we were unable to recover it. 
[... the same three-line failure triplet (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for 199 further connection attempts, timestamps 12:33:51.993789 through 12:33:52.058852 ...]
00:29:46.644 [2024-06-10 12:33:52.059157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.644 [2024-06-10 12:33:52.059167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-06-10 12:33:52.059512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.059523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.059756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.059766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.060086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.060096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.060285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.060297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.060591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.060601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.060994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.061005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.061317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.061327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.061667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.061677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.062019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.062031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.062351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.062362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 
00:29:46.644 [2024-06-10 12:33:52.062688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.062698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.063019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.063029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.063324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.063335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.063703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.063713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.063901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.063912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.064192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.064207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.064548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.064558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.064881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.064891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.065211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.065223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.065540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.065550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 
00:29:46.644 [2024-06-10 12:33:52.065894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.065905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.066223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.066234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.066537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.066549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.066881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.066891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.067245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.067257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.067570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.067580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.067918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.067930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.068258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.644 [2024-06-10 12:33:52.068268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.644 qpair failed and we were unable to recover it. 00:29:46.644 [2024-06-10 12:33:52.068610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.068621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.068939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.068949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 
00:29:46.645 [2024-06-10 12:33:52.069264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.069274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.069617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.069627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.069976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.069987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.070311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.070321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.070535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.070545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.070863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.070873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.071213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.071225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.071564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.071574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.071899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.071910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.072232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.072242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 
00:29:46.645 [2024-06-10 12:33:52.072565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.072575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.072944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.072955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.073146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.073157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.073494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.073505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.073847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.073859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.074203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.074214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.074541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.074551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.074740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.074750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.075076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.075086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.075428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.075440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 
00:29:46.645 [2024-06-10 12:33:52.075751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.075762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.076090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.076102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.645 qpair failed and we were unable to recover it. 00:29:46.645 [2024-06-10 12:33:52.076429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.645 [2024-06-10 12:33:52.076440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.646 qpair failed and we were unable to recover it. 00:29:46.646 [2024-06-10 12:33:52.076764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.076775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.077095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.077106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.077438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.077449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.077789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.077799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.078109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.078121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.078308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.078321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.078641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.078652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 
00:29:46.647 [2024-06-10 12:33:52.078879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.078890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.079118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.079128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.079449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.079459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.079777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.079787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.080138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.080149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.080472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.080482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.080803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.080814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.081152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.081163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.081475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.081485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.081810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.081821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 
00:29:46.647 [2024-06-10 12:33:52.082141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.082152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.082460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.082473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.082814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.082825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.083159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.083170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.083488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.083499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.083828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.083839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.084152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.084163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.084489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.084500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.084812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.084824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.085147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.085157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 
00:29:46.647 [2024-06-10 12:33:52.085502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.085514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.085904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.085915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.086230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.086242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.086582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.086592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.086937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.647 [2024-06-10 12:33:52.086948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.647 qpair failed and we were unable to recover it. 00:29:46.647 [2024-06-10 12:33:52.087270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.087280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.087671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.087681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.087868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.087879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.088190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.088203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.088536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.088546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 
00:29:46.648 [2024-06-10 12:33:52.088868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.088878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.089201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.089211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.089545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.089556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.089905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.089915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.090237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.090247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.090576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.090586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.090934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.090945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.091281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.091292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.091614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.091626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.091979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.091989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 
00:29:46.648 [2024-06-10 12:33:52.092241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.092251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.092575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.092585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.092907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.092919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.093226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.093237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.093551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.093562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.093881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.093892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.094216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.094226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.094565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.094576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.094917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.094927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.095284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.095295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 
00:29:46.648 [2024-06-10 12:33:52.095653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.095664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.095975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.095985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.096174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.096185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.096487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.096498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.096827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.096837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.097162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.097173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.097514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.097525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.097844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.097855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.648 [2024-06-10 12:33:52.098174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.648 [2024-06-10 12:33:52.098185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.648 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.098515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.098526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 
00:29:46.649 [2024-06-10 12:33:52.098879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.098890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.099207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.099220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.099526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.099536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.099879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.099890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.100240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.100250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.100469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.100482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.100800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.100811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.101000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.101011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.101316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.101327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 00:29:46.649 [2024-06-10 12:33:52.101652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.649 [2024-06-10 12:33:52.101663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.649 qpair failed and we were unable to recover it. 
00:29:46.649 [2024-06-10 12:33:52.101983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.649 [2024-06-10 12:33:52.101993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.649 qpair failed and we were unable to recover it.
00:29:46.649 [... the identical record pair (posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 12:33:52.102316 through 12:33:52.172561, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:46.653 [2024-06-10 12:33:52.172896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.653 [2024-06-10 12:33:52.172905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.653 qpair failed and we were unable to recover it.
00:29:46.653 [2024-06-10 12:33:52.173125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.173134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.173487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.173496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.173837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.173848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.174183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.174193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.174548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.174558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.174898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.174907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.175213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.175223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.175565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.175574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.175896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.175905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.176201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.176211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 
00:29:46.653 [2024-06-10 12:33:52.176540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.176549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.176895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.176904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.177234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.177243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.177580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.653 [2024-06-10 12:33:52.177589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.653 qpair failed and we were unable to recover it. 00:29:46.653 [2024-06-10 12:33:52.177927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.177937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.178280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.178290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.178625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.178635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.178958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.178967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.179271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.179281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.179490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.179503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.179829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.179838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.180159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.180168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.180481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.180492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.180843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.180852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.181173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.181182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.181527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.181536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.181881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.181890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.182230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.182239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.182537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.182547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.182831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.182840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.183183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.183193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.183547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.183556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.183716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.183725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.184111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.184121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.184437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.184447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.184692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.184702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.185000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.185009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.185326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.185338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.185715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.185724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.186015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.186025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.186362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.186372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.186763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.186773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.187067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.187077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.187311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.187321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.187646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.187655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.187990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.187999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.188322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.188333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.188710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.188720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.189027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.189036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.189304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.189314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.189638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.189647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.189991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.190000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.190235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.190245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.190571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.190580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.190797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.190806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.191019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.191029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.191327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.191337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.191679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.191688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.191910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.191919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.192261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.192271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.192631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.192641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.192994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.193003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.193374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.193384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.193723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.193733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.194088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.194459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.194470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.194726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.194735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.195063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.195072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.195380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.195390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.195637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.195646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.195962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.195974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.196319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.196329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.196655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.196664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.196984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.196993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.197108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.197117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.197770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.197790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.198131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.198141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.198444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.198454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.198775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.198785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.199126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.199136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 
00:29:46.654 [2024-06-10 12:33:52.199455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.199465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.199784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.199796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.200120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.200130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.200369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.200379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.200711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.654 [2024-06-10 12:33:52.200721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.654 qpair failed and we were unable to recover it. 00:29:46.654 [2024-06-10 12:33:52.201037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.201046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.201371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.201381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.201630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.201640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.201948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.201957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.202275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.202284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 
00:29:46.655 [2024-06-10 12:33:52.202603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.202613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.202960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.202969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.203309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.203319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.203646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.203655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.203840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.203851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.204206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.204216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.204518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.204527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.204845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.204857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.205178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.205188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.205581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.205591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 
00:29:46.655 [2024-06-10 12:33:52.205933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.205942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.206346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.206355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.206686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.206695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.206888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.206898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.207249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.207259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.207597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.207607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.207846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.207857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.208073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.208082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.208399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.208409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.208618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.208627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 
00:29:46.655 [2024-06-10 12:33:52.208873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.208882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.209207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.209217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.209568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.209577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.209924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.210246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.210256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.210549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.210558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.210886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.210896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.211235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.211245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.211530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.211539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.211865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.211874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 
00:29:46.655 [2024-06-10 12:33:52.212122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.212131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.212526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.212536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.212864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.212873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.213186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.655 [2024-06-10 12:33:52.213198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.655 qpair failed and we were unable to recover it. 00:29:46.655 [2024-06-10 12:33:52.213508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.390861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.391320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.391353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.391710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.391722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.392084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.392096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.392460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.392477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.392810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.392822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 
00:29:46.922 [2024-06-10 12:33:52.393163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.393174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.393530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.393543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.393844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.393856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.394186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.394206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.394445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.394460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.394815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.394827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.395058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.395070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.395490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.395502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.395814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.395826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.396179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.396190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 
00:29:46.922 [2024-06-10 12:33:52.396474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.396486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.396890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.396903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.397218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.397230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.397587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.397600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.397819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.397832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.398217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.398228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.398566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.398577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.398901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.398913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.399250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.399262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.399570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.399582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 
00:29:46.922 [2024-06-10 12:33:52.399889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.399900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.400234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.400246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.400467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.400478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.400827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.400839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.401200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.401212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.401516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.401528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.401824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.922 [2024-06-10 12:33:52.401836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.922 qpair failed and we were unable to recover it. 00:29:46.922 [2024-06-10 12:33:52.402171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.402183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.402539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.402551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.402881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.402894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 
00:29:46.923 [2024-06-10 12:33:52.403276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.403288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.403634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.403645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.404030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.404042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.404375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.404389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.404735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.404747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.405081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.405095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.405224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.405233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.405590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.405602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.405940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.405951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.406285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.406298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 
00:29:46.923 [2024-06-10 12:33:52.406651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.406663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.407005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.407017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.407386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.407398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.407785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.407797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.408121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.408135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.408531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.408543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.408875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.408889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.409211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.409224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.409549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.409561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.409886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.409898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 
00:29:46.923 [2024-06-10 12:33:52.410234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.410245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.410500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.410512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.410830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.410841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.411166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.411178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.411356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.411369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.411738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.411750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.412031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.412043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.412357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.412368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.412540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.412552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.412896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.412909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 
00:29:46.923 [2024-06-10 12:33:52.413262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.413276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.413580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.413596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.413939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.413954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.414287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.414299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.414626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.414639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.923 qpair failed and we were unable to recover it. 00:29:46.923 [2024-06-10 12:33:52.414863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.923 [2024-06-10 12:33:52.414874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.415214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.415226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.415588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.415601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.415901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.415913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.416248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.416261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 
00:29:46.924 [2024-06-10 12:33:52.416606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.416620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.416803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.416816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.417163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.417174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.417420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.417435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.417794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.417806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.418140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.418152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.418508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.418520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.418756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.418769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.418987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.418999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.419218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.419233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 
00:29:46.924 [2024-06-10 12:33:52.419571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.419583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.419941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.419953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.420053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.420064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.420407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.420420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.420771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.420783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.421116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.421129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.421304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.421317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.421675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.421688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.421922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.421935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.422247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.422261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 
00:29:46.924 [2024-06-10 12:33:52.422586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.422598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.422801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.422812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.423136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.423148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.423397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.423411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.423722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.423734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.424070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.424083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.424511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.424524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.424835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.424847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.425069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.425079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.425319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.425331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 
00:29:46.924 [2024-06-10 12:33:52.425685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.425698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.426038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.426051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.426307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.426320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.426678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.924 [2024-06-10 12:33:52.426690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.924 qpair failed and we were unable to recover it. 00:29:46.924 [2024-06-10 12:33:52.427047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.427059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.427395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.427407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.427794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.427805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.428033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.428044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.428387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.428399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.428729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.428741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 
00:29:46.925 [2024-06-10 12:33:52.429074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.429086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.429525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.429539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.429778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.429789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.430065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.430077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.430394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.430405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.430731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.430744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.431058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.431070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.431287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.431300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.431669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.431681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.432033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.432046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 
00:29:46.925 [2024-06-10 12:33:52.432370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.432382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.432728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.432740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.433077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.433090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.433500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.433512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.433869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.433883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.434217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.434229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.434539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.434552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.434805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.434817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.435124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.435136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.435456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.435469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 
00:29:46.925 [2024-06-10 12:33:52.435804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.435816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.436142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.436154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.436485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.436497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.436831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.436845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.437188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.437210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.437524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.437537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.437782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.437794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.438018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.438029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.438355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.438370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.438728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.438740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 
00:29:46.925 [2024-06-10 12:33:52.439094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.439106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.439441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.439452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.925 qpair failed and we were unable to recover it. 00:29:46.925 [2024-06-10 12:33:52.439786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.925 [2024-06-10 12:33:52.439800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.440034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.440047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.440386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.440400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.440731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.440743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.441074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.441088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.441391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.441403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.441731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.441745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.441939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.441951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 
00:29:46.926 [2024-06-10 12:33:52.442244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.442255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.442588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.442600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.442964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.442976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.443327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.443339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.443684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.443696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.443988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.444000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.444355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.444366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.444711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.444729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.445074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.445086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.445426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.445439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 
00:29:46.926 [2024-06-10 12:33:52.445770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.445783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.446142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.446155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.446497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.446511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.446849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.446862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.447203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.447216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.447602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.447615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.447964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.447977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.448332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.448345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.448672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.448684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 00:29:46.926 [2024-06-10 12:33:52.449050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.926 [2024-06-10 12:33:52.449062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.926 qpair failed and we were unable to recover it. 
00:29:46.926 [2024-06-10 12:33:52.449406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.926 [2024-06-10 12:33:52.449418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.926 qpair failed and we were unable to recover it.
00:29:46.926-00:29:46.932 [the same three-line failure (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-06-10 12:33:52.449768 through 12:33:52.516407]
00:29:46.932 [2024-06-10 12:33:52.516753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.932 [2024-06-10 12:33:52.516764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:46.932 qpair failed and we were unable to recover it.
00:29:46.932 [2024-06-10 12:33:52.517109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.517121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.517441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.517454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.517785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.517796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.518122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.518135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.518447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.518460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.518681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.518693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.519037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.519050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.519386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.519398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.519715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.519729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:46.932 [2024-06-10 12:33:52.520076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.520090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 
00:29:46.932 [2024-06-10 12:33:52.520408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.932 [2024-06-10 12:33:52.520422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:46.932 qpair failed and we were unable to recover it. 00:29:47.227 [2024-06-10 12:33:52.520763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.227 [2024-06-10 12:33:52.520778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.227 qpair failed and we were unable to recover it. 00:29:47.227 [2024-06-10 12:33:52.521128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.227 [2024-06-10 12:33:52.521142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.227 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.521453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.521466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.521826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.521841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.522166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.522179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.522403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.522414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.522748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.522759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.523100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.523113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.523430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.523443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 
00:29:47.228 [2024-06-10 12:33:52.523665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.523676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.523907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.523921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.524261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.524275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.524623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.524635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.524992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.525004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.526079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.526107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.526442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.526455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.526686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.526698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.527013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.527027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.527386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.527398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 
00:29:47.228 [2024-06-10 12:33:52.527742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.527754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.528083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.528095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.528428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.528440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.528642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.528654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.528995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.529007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.529334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.529347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.529567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.529579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.529907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.529918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.530282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.530294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.530628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.530640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 
00:29:47.228 [2024-06-10 12:33:52.530863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.530874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.531222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.531234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.531578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.531591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.531927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.531937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.532287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.532299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.532619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.532631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.228 qpair failed and we were unable to recover it. 00:29:47.228 [2024-06-10 12:33:52.532968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.228 [2024-06-10 12:33:52.532978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.533312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.533325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.533676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.533687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.534015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.534027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 
00:29:47.229 [2024-06-10 12:33:52.534346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.534357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.534565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.534576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.534919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.534930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.535256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.535269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.535594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.535604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.535933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.535944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.536135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.536148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.536490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.536502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.536828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.536840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.537166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.537177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 
00:29:47.229 [2024-06-10 12:33:52.537531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.537542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.537871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.537881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.538207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.538218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.538568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.538580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.538903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.538915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.539245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.539257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.539548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.539559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.539876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.539888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.540234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.540245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.540584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.540594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 
00:29:47.229 [2024-06-10 12:33:52.540919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.540931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.541257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.541268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.541614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.541626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.541952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.541964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.542292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.542304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.542648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.542660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.542973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.542985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.543317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.543330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.543680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.543692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.229 [2024-06-10 12:33:52.544020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.544031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 
00:29:47.229 [2024-06-10 12:33:52.544380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.229 [2024-06-10 12:33:52.544391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.229 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.544730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.544741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.545078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.545089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.545451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.545464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.545809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.545821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.546160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.546171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.546497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.546508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.546833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.546844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.547189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.547206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.547514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.547527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 
00:29:47.230 [2024-06-10 12:33:52.547853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.547865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.548199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.548211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.548533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.548544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.548871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.548883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.549217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.549229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.549583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.549594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.549937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.549948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.550240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.550251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.550568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.550580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.550862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.550874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 
00:29:47.230 [2024-06-10 12:33:52.551225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.551236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.551568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.551580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.551893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.551904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.552233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.552244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.552581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.552594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.552911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.552922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.553112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.553124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.553407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.553417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.553664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.553675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.553937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.553949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 
00:29:47.230 [2024-06-10 12:33:52.554162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.554173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.554525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.554536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.554890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.554903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.555224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.555237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.555561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.555572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.555900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.555911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.556256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.556270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.556606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.556618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.230 qpair failed and we were unable to recover it. 00:29:47.230 [2024-06-10 12:33:52.556931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.230 [2024-06-10 12:33:52.556944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.557276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.557288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 
00:29:47.231 [2024-06-10 12:33:52.557587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.557598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.557924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.557934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.558184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.558208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.558538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.558550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.558866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.558879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.559224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.559236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.559584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.559596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.559925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.559937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.560285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.560298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 00:29:47.231 [2024-06-10 12:33:52.560622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.231 [2024-06-10 12:33:52.560633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.231 qpair failed and we were unable to recover it. 
00:29:47.231 [2024-06-10 12:33:52.560977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.231 [2024-06-10 12:33:52.560989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.231 qpair failed and we were unable to recover it.
[... the same three-line error block repeats approximately 200 more times between 12:33:52.561315 and 12:33:52.630133, every attempt failing with errno = 111 against tqpair=0x141d8c0 at 10.0.0.2:4420 ...]
00:29:47.236 [2024-06-10 12:33:52.630836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.236 [2024-06-10 12:33:52.630847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.237 qpair failed and we were unable to recover it.
00:29:47.237 [2024-06-10 12:33:52.631192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.631207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.631505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.631518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.631841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.631852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.632180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.632191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.632526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.632537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.632849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.632860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.633147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.633158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.633468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.633479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.633809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.633820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.634218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.634229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 
00:29:47.237 [2024-06-10 12:33:52.634547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.634558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.634884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.634896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.635242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.635254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.635574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.635585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.635799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.635810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.636122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.636133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.636456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.636467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.636793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.636804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.637125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.637135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.637463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.637474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 
00:29:47.237 [2024-06-10 12:33:52.637817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.637829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.638055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.638066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.638383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.638394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.638734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.638745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.639094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.639105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.639444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.639455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.639817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.639827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.640155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.640166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.640517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.640529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.640850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.640861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 
00:29:47.237 [2024-06-10 12:33:52.641160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.641171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.237 qpair failed and we were unable to recover it. 00:29:47.237 [2024-06-10 12:33:52.641495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.237 [2024-06-10 12:33:52.641506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.641833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.641845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.642171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.642183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.642457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.642468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.642805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.642817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.643165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.643176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.643498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.643510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.643836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.643847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.644175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.644188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 
00:29:47.238 [2024-06-10 12:33:52.644566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.644577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.644897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.644908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.645255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.645267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.645578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.645588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.645930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.645941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.646264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.646275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.646602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.646613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.646933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.646944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.647300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.647312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.647614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.647625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 
00:29:47.238 [2024-06-10 12:33:52.647851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.647861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.648184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.648198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.648518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.648529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.648841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.648852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.649210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.649221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.649532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.649543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.649887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.649898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.650086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.650097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.650399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.650410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.650726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.650739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 
00:29:47.238 [2024-06-10 12:33:52.651083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.651094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.651445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.651458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.651815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.651827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.652155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.652167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.652513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.652524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.652847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.652858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.653185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.653202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.653546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.653557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.653902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.653913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 00:29:47.238 [2024-06-10 12:33:52.654237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.238 [2024-06-10 12:33:52.654249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.238 qpair failed and we were unable to recover it. 
00:29:47.238 [2024-06-10 12:33:52.654585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.654596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.654824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.654834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.655161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.655172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.655570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.655581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.655923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.655934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.656257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.656269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.656627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.656638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.656958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.656970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.657184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.657199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.657510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.657522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 
00:29:47.239 [2024-06-10 12:33:52.657867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.657878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.658204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.658216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.658560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.658571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.658787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.658798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.659111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.659122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.659457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.659469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.659681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.659692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.660018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.660029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.660375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.660389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.660708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.660719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 
00:29:47.239 [2024-06-10 12:33:52.661092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.661104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.661450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.661462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.661659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.661671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.662010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.662021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.662348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.662359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.662687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.662697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.663020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.663031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.663351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.663362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.663685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.663696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.664018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.664028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 
00:29:47.239 [2024-06-10 12:33:52.664381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.664393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.664713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.664724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.665047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.665058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.665380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.665391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.665588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.665600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.665906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.665918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.666238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.666249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.666492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.666503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.239 [2024-06-10 12:33:52.666850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.239 [2024-06-10 12:33:52.666860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.239 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.667093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.667104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 
00:29:47.240 [2024-06-10 12:33:52.667437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.667447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.667809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.667820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.668761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.668784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.669150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.669162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.670209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.670234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.670563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.670578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.670921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.670932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.671229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.671240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.671573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.671583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.671903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.671913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 
00:29:47.240 [2024-06-10 12:33:52.672258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.672271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.672599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.672609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.672931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.672942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.673264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.673274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.673616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.673627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.673950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.673961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.674155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.674168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.674502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.674513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.674857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.674868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.675203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.675214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 
00:29:47.240 [2024-06-10 12:33:52.675450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.675461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.675794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.675805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.676131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.676142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.676462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.676473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.676802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.676814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.677138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.677148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.677491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.677502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.677824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.677834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.678173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.678183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.678491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.678502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 
00:29:47.240 [2024-06-10 12:33:52.678854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.678865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.679203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.679215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.679560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.679571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.679895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.679906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.680254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.680265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.680583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.680594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.680786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.240 [2024-06-10 12:33:52.680796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.240 qpair failed and we were unable to recover it. 00:29:47.240 [2024-06-10 12:33:52.681119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.681131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.681450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.681462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.681763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.681774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 
00:29:47.241 [2024-06-10 12:33:52.682095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.682105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.682448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.682459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.682804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.682815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.683255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.683266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.683575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.683586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.683927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.683937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.684283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.684295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.684527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.684537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.684893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.684904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.685232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.685244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 
00:29:47.241 [2024-06-10 12:33:52.685568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.685578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.685928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.685939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.686233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.686245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.686557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.686568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.686875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.686887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.687229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.687240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.687579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.687590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.687915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.687927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.688272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.688283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.688635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.688645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 
00:29:47.241 [2024-06-10 12:33:52.688972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.688983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.689206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.689216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.689393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.689405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.689732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.689742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.690074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.690085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.690501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.690512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.690811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.690822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.691220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.691232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.691573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.691585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.691913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.691923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 
00:29:47.241 [2024-06-10 12:33:52.692235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.692246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.692580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.692591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.692901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.692912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.693125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.241 [2024-06-10 12:33:52.693137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.241 qpair failed and we were unable to recover it. 00:29:47.241 [2024-06-10 12:33:52.693458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.693471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.693785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.693795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.693980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.693990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.694296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.694306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.694635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.694647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.694874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.694885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 
00:29:47.242 [2024-06-10 12:33:52.695200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.695212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.695527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.695538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.695879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.695890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.696241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.696252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.696505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.696515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.696863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.696874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.697055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.697065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.697345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.697356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.697707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.697719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.698077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.698087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 
00:29:47.242 [2024-06-10 12:33:52.698372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.698383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.698689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.698700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.699042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.699053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.699273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.699284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.699672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.699683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.700028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.700038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.700367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.700379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.700710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.700721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.700922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.700932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.701331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.701342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 
00:29:47.242 [2024-06-10 12:33:52.701663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.701675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.701996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.702006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.702391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.702402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.702756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.702767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.702993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.703004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.703299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.703310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.703642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.703654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.242 [2024-06-10 12:33:52.703908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.242 [2024-06-10 12:33:52.703918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.242 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.704287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.704299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.704387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.704397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 
00:29:47.243 [2024-06-10 12:33:52.704695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.704707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.705023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.705033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.705261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.705272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.705603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.705614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.705939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.705950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.706165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.706176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.706529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.706540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.706859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.706869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.707085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.707096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.707438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.707450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 
00:29:47.243 [2024-06-10 12:33:52.707725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.707736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.708064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.708074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.708390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.708401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.708742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.708753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.709098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.709109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.709437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.709449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.709762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.709772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.710088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.710099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.710429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.710440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.710755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.710767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 
00:29:47.243 [2024-06-10 12:33:52.711091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.711102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.711453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.711465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.711815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.711826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.712151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.712162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.712456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.712468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.712686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.712697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.713028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.713039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.713259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.713270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.713598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.713610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.713926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.713937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 
00:29:47.243 [2024-06-10 12:33:52.714276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.714287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.714617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.714627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.714942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.714953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.715374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.715385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.715696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.243 [2024-06-10 12:33:52.715706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.243 qpair failed and we were unable to recover it. 00:29:47.243 [2024-06-10 12:33:52.715929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.715939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.716241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.716252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.716572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.716582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.716935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.716946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.717265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.717276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 
00:29:47.244 [2024-06-10 12:33:52.717603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.717613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.717931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.717942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.718246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.718258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.718524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.718535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.718849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.718859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.719093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.719103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.719442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.719453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.719783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.719794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.720114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.720125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.720441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.720452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 
00:29:47.244 [2024-06-10 12:33:52.720793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.720805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.721091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.721102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.721426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.721438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.721778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.721789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.722138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.722150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.722427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.722438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.722761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.722772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.722996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.723007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.723257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.723270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.723602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.723614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 
00:29:47.244 [2024-06-10 12:33:52.723928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.723938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.724256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.724268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.724603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.724614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.724734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.724744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.725081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.725091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.725447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.725458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.725782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.725793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.726077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.726086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.726392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.726403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.726751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.726761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 
00:29:47.244 [2024-06-10 12:33:52.727064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.727075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.727420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.727431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.727661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.727672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.728043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.244 [2024-06-10 12:33:52.728053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.244 qpair failed and we were unable to recover it. 00:29:47.244 [2024-06-10 12:33:52.728220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.728231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.728362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.728373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.728693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.728703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.729002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.729013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.729235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.729246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.729704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.729715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 
00:29:47.245 [2024-06-10 12:33:52.730032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.730043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.730389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.730399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.730587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.730598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.730917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.730927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.731242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.731254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.731584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.731599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.731901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.731912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.732229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.732240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.732448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.732458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 00:29:47.245 [2024-06-10 12:33:52.732793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.245 [2024-06-10 12:33:52.732804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.245 qpair failed and we were unable to recover it. 
00:29:47.250 [2024-06-10 12:33:52.793781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.793791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.794202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.794213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.794539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.794551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.794859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.794869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.795730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.795752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.796079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.796091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.796310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.796321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.796640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.796652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.796995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.797006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.797387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.797398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 
00:29:47.250 [2024-06-10 12:33:52.797709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.797720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.798088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.798099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.798420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.798431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.798746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.798757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.250 [2024-06-10 12:33:52.799122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.250 [2024-06-10 12:33:52.799132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.250 qpair failed and we were unable to recover it. 00:29:47.251 [2024-06-10 12:33:52.799446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.251 [2024-06-10 12:33:52.799457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.251 qpair failed and we were unable to recover it. 00:29:47.251 [2024-06-10 12:33:52.799779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.251 [2024-06-10 12:33:52.799789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.251 qpair failed and we were unable to recover it. 00:29:47.251 [2024-06-10 12:33:52.800127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.251 [2024-06-10 12:33:52.800137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.251 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.800381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.800395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.800709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.800720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 
00:29:47.532 [2024-06-10 12:33:52.800943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.800953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.801179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.801190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.801561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.801574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.801888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.801898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.802237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.802248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.802616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.802627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.802982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.802993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.532 [2024-06-10 12:33:52.803320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.532 [2024-06-10 12:33:52.803330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.532 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.803656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.803667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.803981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.803994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 
00:29:47.533 [2024-06-10 12:33:52.804304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.804316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.804644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.804656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.804975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.804986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.805335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.805346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.805647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.805657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.805850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.805860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.806203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.806213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.806586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.806596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.806819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.806829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.806913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.806924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 
00:29:47.533 [2024-06-10 12:33:52.807278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.807289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.807593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.807605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.807920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.807930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.808330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.808342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.808684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.808696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.809059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.809069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.809416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.809428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.809776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.809787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.810110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.810121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.810461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.810473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 
00:29:47.533 [2024-06-10 12:33:52.810803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.810814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.811114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.811126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.811356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.811367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.811740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.811751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.812087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.812098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.812467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.812479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.812694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.812705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.813038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.813049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.813187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.813212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.813506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.813516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 
00:29:47.533 [2024-06-10 12:33:52.813861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.813872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.814203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.814214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.814549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.814560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.814867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.814878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.815244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.815255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.815598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.533 [2024-06-10 12:33:52.815610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.533 qpair failed and we were unable to recover it. 00:29:47.533 [2024-06-10 12:33:52.815795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.815806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.816204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.816216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.816791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.816810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.817137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.817147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 
00:29:47.534 [2024-06-10 12:33:52.817456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.817468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.817766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.817776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.818139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.818150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.818559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.818571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.818977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.818988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.819333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.819345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.819591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.819603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.819932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.819944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.820320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.820331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.820627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.820638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 
00:29:47.534 [2024-06-10 12:33:52.820998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.821009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.821202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.821212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.821510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.821521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.821830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.821841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.822074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.822084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.822289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.822302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.822570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.822581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.822924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.822934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.823291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.823302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.823599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.823609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 
00:29:47.534 [2024-06-10 12:33:52.823918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.823928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.824117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.824127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.824456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.824467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.824794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.824806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.825005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.825017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.825336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.825347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.825676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.825686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.825904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.825914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.826254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.826265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.826555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.826567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 
00:29:47.534 [2024-06-10 12:33:52.826967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.826977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.827310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.827321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.827649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.827661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.534 qpair failed and we were unable to recover it. 00:29:47.534 [2024-06-10 12:33:52.827993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.534 [2024-06-10 12:33:52.828004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.828365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.828376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.828668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.828679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.828886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.828896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.828997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.829006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.829316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.829328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.829536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.829546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 
00:29:47.535 [2024-06-10 12:33:52.829765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.829775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.830108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.830119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.830342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.830353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.830692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.830703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.831064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.831075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.831215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.831226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.831584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.831594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.831938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.831951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.832191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.832206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.832472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.832482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 
00:29:47.535 [2024-06-10 12:33:52.832803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.832814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.833038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.833048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.833355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.833367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.833720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.833732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.833948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.833959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.834290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.834301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.834642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.834653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.834919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.834929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.835163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.835174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.835534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.835545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 
00:29:47.535 [2024-06-10 12:33:52.835865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.835876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.836206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.836218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.836491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.836503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.836858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.836869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.837200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.837212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.837451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.837461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.837781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.837792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.838137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.838147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.838511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.838522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 00:29:47.535 [2024-06-10 12:33:52.838848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.535 [2024-06-10 12:33:52.838859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.535 qpair failed and we were unable to recover it. 
00:29:47.535 [2024-06-10 12:33:52.839169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.535 [2024-06-10 12:33:52.839180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.535 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 12:33:52.839 and 12:33:52.906; duplicate records condensed ...]
00:29:47.541 [2024-06-10 12:33:52.905598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.541 [2024-06-10 12:33:52.905609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.541 qpair failed and we were unable to recover it.
00:29:47.541 [2024-06-10 12:33:52.905938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.905949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.906282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.906293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.906623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.906633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.907013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.907023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.907363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.907374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.907662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.907673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.541 [2024-06-10 12:33:52.907918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.541 [2024-06-10 12:33:52.907928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.541 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.908154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.908167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.908401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.908412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.908724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.908734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 
00:29:47.542 [2024-06-10 12:33:52.908918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.908929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.909200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.909214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.909444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.909455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.909772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.909782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.910102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.910113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.910436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.910447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.910791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.910802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.910995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.911005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.911340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.911351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.911611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.911621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 
00:29:47.542 [2024-06-10 12:33:52.911842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.911852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.912201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.912213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.912501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.912512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.912700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.912711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.912994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.913004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.913332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.913343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.913689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.913699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.914014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.914025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.914251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.914261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.914455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.914466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 
00:29:47.542 [2024-06-10 12:33:52.914741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.914752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.914908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.914917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.915262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.915273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.915579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.915591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.915907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.915918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.916239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.916249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.916435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.916444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.542 [2024-06-10 12:33:52.916661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.542 [2024-06-10 12:33:52.916672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.542 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.916996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.917323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.917334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 
00:29:47.543 [2024-06-10 12:33:52.917626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.917637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.917985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.917996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.918383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.918393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.918710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.918721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.919043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.919053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.919386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.919397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.919722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.919733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.920057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.920068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.920418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.920428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.920625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.920636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 
00:29:47.543 [2024-06-10 12:33:52.921017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.921027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.921360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.921370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.921615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.921626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.921828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.921839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.922054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.922064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.922278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.922288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.922626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.922636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.922867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.922877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.923201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.923211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.923373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.923383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 
00:29:47.543 [2024-06-10 12:33:52.923742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.923753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.924069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.924079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.924390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.924401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.924707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.924718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.925042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.925053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.925245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.925256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.925599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.925610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.925944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.925954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.926172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.926182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.926527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.926537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 
00:29:47.543 [2024-06-10 12:33:52.926750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.926760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.927090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.927100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.927511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.927522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.927943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.927953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.928333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.928344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.543 [2024-06-10 12:33:52.928680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.543 [2024-06-10 12:33:52.928691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.543 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.929065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.929076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.929449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.929460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.929685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.929695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.930038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.930050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 
00:29:47.544 [2024-06-10 12:33:52.930384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.930395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.930704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.930716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.931103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.931113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.931372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.931382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.931689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.931699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.931930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.931940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.932245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.932255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.932594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.932605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.932949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.932959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.933327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.933338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 
00:29:47.544 [2024-06-10 12:33:52.933718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.933729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.934040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.934049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.934292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.934303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.934639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.934650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.934995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.935005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.935259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.935270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.935455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.935465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.935708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.935719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.936056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.936066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.936410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.936421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 
00:29:47.544 [2024-06-10 12:33:52.936607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.936618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.936939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.936950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.937173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.937184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.937413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.937424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.937635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.937645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.937922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.937934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.938251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.938264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.938627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.938639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.938958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.938968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.939296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.939308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 
00:29:47.544 [2024-06-10 12:33:52.939646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.939656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.939988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.939998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.940350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.940361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.544 [2024-06-10 12:33:52.940708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.544 [2024-06-10 12:33:52.940718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.544 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.941053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.941063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.941390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.941401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.941588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.941599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.941899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.941910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.942253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.942265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.942594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.942604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 
00:29:47.545 [2024-06-10 12:33:52.942927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.942937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.943278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.943289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.943575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.943585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.943804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.943814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.944009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.944020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.944217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.944228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.944534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.944544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.944768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.944778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.945098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.945109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 00:29:47.545 [2024-06-10 12:33:52.945430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.545 [2024-06-10 12:33:52.945441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.545 qpair failed and we were unable to recover it. 
00:29:47.545 [2024-06-10 12:33:52.945761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.545 [2024-06-10 12:33:52.945773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.545 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair repeats ≈210 times for tqpair=0x141d8c0 (addr=10.0.0.2, port=4420), timestamps 2024-06-10 12:33:52.945 through 12:33:53.015 ...]
00:29:47.551 [2024-06-10 12:33:53.015018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.551 [2024-06-10 12:33:53.015030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.551 qpair failed and we were unable to recover it.
00:29:47.551 [2024-06-10 12:33:53.015351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.015362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.015715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.015726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.016044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.016054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.016379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.016391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.016726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.016736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.017077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.017088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.017399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.017413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.017752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.017763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.018084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.018095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.018424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.018435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 
00:29:47.551 [2024-06-10 12:33:53.018800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.018813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.019135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.019146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.019439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.019449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.019791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.019802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.020123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.020134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.020439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.020450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.020789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.020800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.021149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.021160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.021482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.021494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.021815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.021825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 
00:29:47.551 [2024-06-10 12:33:53.022149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.022160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.022501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.551 [2024-06-10 12:33:53.022512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.551 qpair failed and we were unable to recover it. 00:29:47.551 [2024-06-10 12:33:53.022835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.022846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.023166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.023178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.023525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.023537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.023729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.023740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.024001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.024013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.024335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.024346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.024646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.024658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.024981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.024991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 
00:29:47.552 [2024-06-10 12:33:53.025310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.025322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.025669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.025679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.025868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.025879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.026226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.026237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.026577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.026588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.026907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.026918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.027237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.027248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.027590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.027600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.027945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.027956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.028280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.028292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 
00:29:47.552 [2024-06-10 12:33:53.028611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.028621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.028997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.029007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.029298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.029309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.029635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.029645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.029961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.029972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.030345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.030355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.030667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.030677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.030999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.031010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.031348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.552 [2024-06-10 12:33:53.031359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.552 qpair failed and we were unable to recover it. 00:29:47.552 [2024-06-10 12:33:53.031670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.031680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 
00:29:47.553 [2024-06-10 12:33:53.032005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.032015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.032336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.032347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.032592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.032603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.032911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.032921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.033239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.033251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.033581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.033591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.033941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.033951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.034286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.034297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.034627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.034637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.034955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.034965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 
00:29:47.553 [2024-06-10 12:33:53.035265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.035276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.035622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.035633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.035974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.035985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.036305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.036316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.036654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.036664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.037007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.037018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.037397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.037408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.037717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.037728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.038054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.038064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.038409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.038420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 
00:29:47.553 [2024-06-10 12:33:53.038737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.038747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.039071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.039081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.039383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.039393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.039685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.039696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.040016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.040027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.040389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.040400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.040725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.040735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.041090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.041437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.041450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.041771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.041782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 
00:29:47.553 [2024-06-10 12:33:53.042121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.042132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.042442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.042453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.042771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.042782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.043104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.043115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.043444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.043456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.553 [2024-06-10 12:33:53.043806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.553 [2024-06-10 12:33:53.043817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.553 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.044179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.044190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.044512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.044523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.044838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.044849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.045192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.045208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 
00:29:47.554 [2024-06-10 12:33:53.045399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.045410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.045742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.045753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.046077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.046089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.046419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.046430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.046751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.046762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.047085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.047095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.047314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.047325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.047668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.047679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.047990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.048001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.048186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.048203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 
00:29:47.554 [2024-06-10 12:33:53.048481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.048491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.048837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.048847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.049184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.049198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.049494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.049504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.049825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.049836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.050041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.050053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.050377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.050387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.050712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.050723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.050919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.050930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.051237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.051248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 
00:29:47.554 [2024-06-10 12:33:53.051578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.051588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.051909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.051919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.052241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.052252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.052581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.052592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.052912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.052922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.053267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.053278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.053612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.053622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.053966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.053976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.054297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.054309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.054654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.054665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 
00:29:47.554 [2024-06-10 12:33:53.055005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.055015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.055362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.055373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.055697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.055707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.056034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.554 [2024-06-10 12:33:53.056044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.554 qpair failed and we were unable to recover it. 00:29:47.554 [2024-06-10 12:33:53.056388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.056399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 00:29:47.555 [2024-06-10 12:33:53.056746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.056756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 00:29:47.555 [2024-06-10 12:33:53.057114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.057124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 00:29:47.555 [2024-06-10 12:33:53.057465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.057475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 00:29:47.555 [2024-06-10 12:33:53.057816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.057828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 00:29:47.555 [2024-06-10 12:33:53.058023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.555 [2024-06-10 12:33:53.058035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.555 qpair failed and we were unable to recover it. 
00:29:47.555 [2024-06-10 12:33:53.058376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.555 [2024-06-10 12:33:53.058387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.555 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times between 12:33:53.058 and 12:33:53.127 ...]
00:29:47.836 [2024-06-10 12:33:53.127741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.836 [2024-06-10 12:33:53.127753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.836 qpair failed and we were unable to recover it.
00:29:47.836 [2024-06-10 12:33:53.128073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.128083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.128403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.128414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.128724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.128735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.129104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.129115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.129430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.129441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.129783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.129793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.130117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.130128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.130451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.130461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.130651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.130662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.131012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.131022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 
00:29:47.836 [2024-06-10 12:33:53.131351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.131362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.131681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.131693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.132017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.132028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.132382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.132393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.132714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.132725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.133047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.133059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.133375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.133386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.133731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.133742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.134053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.134064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.134377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.134388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 
00:29:47.836 [2024-06-10 12:33:53.134699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.134711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.135016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.135026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.135224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.135235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.135442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.135453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.135780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.135791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.136082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.136093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.136331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.836 [2024-06-10 12:33:53.136342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.836 qpair failed and we were unable to recover it. 00:29:47.836 [2024-06-10 12:33:53.136670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.136682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.137000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.137011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.137357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.137367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 
00:29:47.837 [2024-06-10 12:33:53.137686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.137697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.138018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.138029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.138362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.138372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.138727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.138737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.139089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.139099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.139402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.139413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.139745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.139755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.140099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.140109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.140339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.140349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.140680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.140691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 
00:29:47.837 [2024-06-10 12:33:53.141013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.141024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.141324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.141334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.141664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.141676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.142073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.142083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.142379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.142391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.142698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.142708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.143024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.143036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.143363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.143374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.143695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.143706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.144037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.144049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 
00:29:47.837 [2024-06-10 12:33:53.144369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.144382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.144689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.144700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.145012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.145022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.145381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.145392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.145711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.145722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.146045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.146055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.146371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.146382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.146571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.146581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.146880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.146891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.147244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.147255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 
00:29:47.837 [2024-06-10 12:33:53.147557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.147568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.147908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.147919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.148239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.148249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.148479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.148489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.148817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.837 [2024-06-10 12:33:53.148828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.837 qpair failed and we were unable to recover it. 00:29:47.837 [2024-06-10 12:33:53.149172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.149182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.149543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.149554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.149913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.149924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.150315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.150326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.150557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.150567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 
00:29:47.838 [2024-06-10 12:33:53.150899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.150910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.151232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.151243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.151569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.151579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.151931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.151942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.152253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.152264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.152601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.152611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.152934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.152944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.153289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.153303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.153512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.153523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.153854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.153864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 
00:29:47.838 [2024-06-10 12:33:53.154076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.154086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.154409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.154419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.154733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.154743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.155058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.155069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.155259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.155270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.155617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.155628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.155948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.155959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.156272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.156282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.156591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.156602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.156953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.156963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 
00:29:47.838 [2024-06-10 12:33:53.157272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.157284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.157596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.157607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.157928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.157939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.158284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.158296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.158615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.158625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.158947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.158958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.159281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.159291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.159640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.159650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.159971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.838 [2024-06-10 12:33:53.159983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.838 qpair failed and we were unable to recover it. 00:29:47.838 [2024-06-10 12:33:53.160342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.160352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 
00:29:47.839 [2024-06-10 12:33:53.160672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.160683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.161036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.161047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.161366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.161377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.161717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.161727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.162053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.162065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.162409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.162420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.162737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.162747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.163041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.163053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.163333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.163343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.163597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.163607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 
00:29:47.839 [2024-06-10 12:33:53.163924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.163935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.164251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.164261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.164583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.164594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.164939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.164949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.165270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.165282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.165605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.165616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.165974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.165985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.166292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.166302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.166633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.166645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.166985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.166996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 
00:29:47.839 [2024-06-10 12:33:53.167317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.167329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.167714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.167725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.167994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.168004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.168327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.168337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.168656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.168666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.168856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.168867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.169201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.169212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.169557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.169568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.169755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.169765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 00:29:47.839 [2024-06-10 12:33:53.170065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.839 [2024-06-10 12:33:53.170077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.839 qpair failed and we were unable to recover it. 
00:29:47.839 [2024-06-10 12:33:53.170341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.839 [2024-06-10 12:33:53.170352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.839 qpair failed and we were unable to recover it.
[... the identical triplet -- connect() failed, errno = 111 / sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats roughly 200 more times between 12:33:53.170 and 12:33:53.239 (wall clock 00:29:47.839-00:29:47.845); duplicate records condensed ...]
00:29:47.845 [2024-06-10 12:33:53.239177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.845 [2024-06-10 12:33:53.239188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.845 qpair failed and we were unable to recover it.
00:29:47.845 [2024-06-10 12:33:53.239375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.239386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.239684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.239695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.240016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.240027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.240344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.240355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.240692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.240703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.241008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.241019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.241248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.241259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.241579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.241589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.241962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.241973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.242313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.242324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 
00:29:47.845 [2024-06-10 12:33:53.242658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.242669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.242990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.243001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.243320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.243332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.243678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.243688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.244012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.244022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.244352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.244364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.845 qpair failed and we were unable to recover it. 00:29:47.845 [2024-06-10 12:33:53.244715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.845 [2024-06-10 12:33:53.244725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.245078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.245090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.245373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.245384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.245710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.245721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 
00:29:47.846 [2024-06-10 12:33:53.246063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.246073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.246329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.246340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.246680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.246693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.246987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.246999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.247346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.247357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.247702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.247713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.248028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.248038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.248361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.248373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.248744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.248754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.249062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.249073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 
00:29:47.846 [2024-06-10 12:33:53.249421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.249432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.249772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.249783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.250180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.250191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.250507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.250517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.250921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.250931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.251251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.251262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.251591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.251602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.251939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.251950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.252547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.252565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.252896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.252907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 
00:29:47.846 [2024-06-10 12:33:53.253212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.253223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.253570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.253580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.253867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.253878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.254090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.254101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.254475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.254486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.254795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.254807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.255064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.255075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.255377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.255388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.255726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.255737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.256079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.256096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 
00:29:47.846 [2024-06-10 12:33:53.256435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.256446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.256750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.256760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.257087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.257097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.257377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.257389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.846 [2024-06-10 12:33:53.257715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.846 [2024-06-10 12:33:53.257726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.846 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.258048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.258059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.258413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.258425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.258610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.258622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.258931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.258942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.259258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.259268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 
00:29:47.847 [2024-06-10 12:33:53.259598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.259609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.259949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.259959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.260285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.260295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.260596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.260606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.260927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.260937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.261249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.261261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.261567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.261577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.261896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.261907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.262224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.262235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.262553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.262563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 
00:29:47.847 [2024-06-10 12:33:53.262883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.262894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.263218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.263229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.263531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.263541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.263749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.263760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.264085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.264096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.264419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.264429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.264712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.264724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.264931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.264941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.265234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.265245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.265568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.265579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 
00:29:47.847 [2024-06-10 12:33:53.265892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.265902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.266279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.266290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.266522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.266532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.266865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.266875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.267092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.267102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.267418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.267429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.267749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.267760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.268085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.268096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.268352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.268362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.268686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.268697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 
00:29:47.847 [2024-06-10 12:33:53.268857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.847 [2024-06-10 12:33:53.268869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.847 qpair failed and we were unable to recover it. 00:29:47.847 [2024-06-10 12:33:53.269211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.269223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.269416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.269426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.269732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.269742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.270074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.270086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.270431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.270442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.270763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.270773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.271119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.271130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.271445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.271457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.271829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.271839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 
00:29:47.848 [2024-06-10 12:33:53.272159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.272170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.272358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.272369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.272691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.272703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.273073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.273083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.273426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.273437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.273769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.273780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.274100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.274110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.274335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.274346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.274649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.274659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.274862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.274873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 
00:29:47.848 [2024-06-10 12:33:53.275050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.275062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.275367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.275378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.275701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.275712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.275905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.275916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.276212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.276222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.276569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.276579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.276923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.276933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.277278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.277290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.277608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.277619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.277934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.277944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 
00:29:47.848 [2024-06-10 12:33:53.278240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.278251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.278573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.278584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.278901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.278912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.279227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.279237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.279536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.279546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.279854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.279864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.280208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.280219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.280559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.280569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.848 [2024-06-10 12:33:53.280881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.848 [2024-06-10 12:33:53.280892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.848 qpair failed and we were unable to recover it. 00:29:47.849 [2024-06-10 12:33:53.281235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.849 [2024-06-10 12:33:53.281246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.849 qpair failed and we were unable to recover it. 
00:29:47.849 [2024-06-10 12:33:53.281377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.849 [2024-06-10 12:33:53.281387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.849 qpair failed and we were unable to recover it.
[... the same three-line record repeats, one record per connect() attempt, with timestamps advancing from 12:33:53.281 through 12:33:53.350: every connect() to 10.0.0.2 port 4420 for tqpair=0x141d8c0 fails with errno = 111 (ECONNREFUSED on Linux), and each attempt ends with the qpair unrecovered ...]
00:29:47.854 [2024-06-10 12:33:53.350204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.854 [2024-06-10 12:33:53.350214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.854 qpair failed and we were unable to recover it.
00:29:47.854 [2024-06-10 12:33:53.350554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.854 [2024-06-10 12:33:53.350564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.854 qpair failed and we were unable to recover it. 00:29:47.854 [2024-06-10 12:33:53.350908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.854 [2024-06-10 12:33:53.350918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.854 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.351242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.351254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.351574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.351584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.351927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.351937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.352279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.352291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.352611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.352621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.352947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.352958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.353285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.353297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.353613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.353623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 
00:29:47.855 [2024-06-10 12:33:53.353946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.353957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.354322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.354332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.354673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.354683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.355028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.355038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.355357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.355377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.355709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.355720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.356039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.356050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.356388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.356400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.356721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.356731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.357049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.357060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 
00:29:47.855 [2024-06-10 12:33:53.357408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.357419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.357762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.357775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.358092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.358103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.358444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.358455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.358777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.358787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.359127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.359138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.359473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.359484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.359739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.359749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.359979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.359990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.360295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.360305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 
00:29:47.855 [2024-06-10 12:33:53.360612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.360624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.360998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.361009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.361335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.361347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.361697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.361707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.362026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.362036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.362422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.362433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.362743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.362755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.363091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.363102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.363439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.363449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 00:29:47.855 [2024-06-10 12:33:53.363770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.363782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.855 qpair failed and we were unable to recover it. 
00:29:47.855 [2024-06-10 12:33:53.363988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.855 [2024-06-10 12:33:53.363999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.364342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.364353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.364544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.364554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.364792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.364804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.365124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.365134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.365408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.365420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.365643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.365653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.365971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.365982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.366306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.366319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.366662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.366672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 
00:29:47.856 [2024-06-10 12:33:53.366995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.367005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.367330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.367342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.367682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.367692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.368001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.368012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.368325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.368336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.368672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.368683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.368902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.368913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.369217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.369229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.369406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.369418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.369711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.369721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 
00:29:47.856 [2024-06-10 12:33:53.370064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.370074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.370430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.370441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.370732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.370743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.370901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.370911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.371142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.371152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.371488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.371500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.371792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.371803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.372113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.372123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.372516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.372529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.372843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.372855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 
00:29:47.856 [2024-06-10 12:33:53.373106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.373117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.373433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.373443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.373798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.373810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.374160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.374171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.374493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.374505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.374825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.374836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.375160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.375171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.856 qpair failed and we were unable to recover it. 00:29:47.856 [2024-06-10 12:33:53.375517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.856 [2024-06-10 12:33:53.375528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.375888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.375899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.376219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.376230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 
00:29:47.857 [2024-06-10 12:33:53.376567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.376578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.376919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.376930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.377294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.377305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.377636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.377648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.377971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.377982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.378329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.378340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.378673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.378685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.378890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.378900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.379220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.379231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.379546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.379557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 
00:29:47.857 [2024-06-10 12:33:53.379827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.379837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.380155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.380166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.380479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.380489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.380793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.380804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.381131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.381142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.381482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.381493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.381847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.381858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.382184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.382208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.382348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.382358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.382659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.382670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 
00:29:47.857 [2024-06-10 12:33:53.382984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.382995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.383343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.383354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.383670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.383682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.383995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.384006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.384328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.384339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.384690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.384701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.385023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.385034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.385357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.385369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.385699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.385710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.386051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.386061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 
00:29:47.857 [2024-06-10 12:33:53.386348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.386359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.386687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.386697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.387018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.387028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.387373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.387384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.857 [2024-06-10 12:33:53.387702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.857 [2024-06-10 12:33:53.387713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.857 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.388034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.388045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.388366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.388380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.388727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.388737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.389102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.389113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.389433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.389444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 
00:29:47.858 [2024-06-10 12:33:53.389763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.389774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.390115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.390126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.390443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.390454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.390773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.390784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.391110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.391120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.391470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.391481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.391804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.391815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.392137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.392148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.392470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.392481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 00:29:47.858 [2024-06-10 12:33:53.392825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.858 [2024-06-10 12:33:53.392835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:47.858 qpair failed and we were unable to recover it. 
00:29:47.858 [2024-06-10 12:33:53.393172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.858 [2024-06-10 12:33:53.393182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:47.858 qpair failed and we were unable to recover it.
[... the same three-message connect() failure (errno = 111) against tqpair=0x141d8c0, addr=10.0.0.2, port=4420 repeats roughly 200 more times between 12:33:53.393 and 12:33:53.462 ...]
00:29:48.138 [2024-06-10 12:33:53.461997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.138 [2024-06-10 12:33:53.462007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.138 qpair failed and we were unable to recover it.
00:29:48.138 [2024-06-10 12:33:53.462349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.462360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.462706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.462716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.463074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.463084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.463423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.463433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.463729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.463740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.464063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.464073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.464417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.464428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.464751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.464762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.465084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.465094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.465384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.465395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 
00:29:48.138 [2024-06-10 12:33:53.465726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.465736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.465890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.465900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.466234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.466245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.466555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.466567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.466878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.466889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.467211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.467221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.467540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.467550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.467893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.467904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.468096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.468106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.468299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.468310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 
00:29:48.138 [2024-06-10 12:33:53.468609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.468621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.468962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.468973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.469202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.469213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.469507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.469518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.469840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.469850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.470140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.470150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.470474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.470485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.470788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.138 [2024-06-10 12:33:53.470800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.138 qpair failed and we were unable to recover it. 00:29:48.138 [2024-06-10 12:33:53.471111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.471121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.471432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.471443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 
00:29:48.139 [2024-06-10 12:33:53.471777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.471788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.472109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.472121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.472399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.472409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.472721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.472733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.473048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.473059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.473373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.473385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.473700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.473710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.474070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.474081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.474423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.474435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.474757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.474767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 
00:29:48.139 [2024-06-10 12:33:53.475089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.475101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.475421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.475431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.475752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.475763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.476088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.476100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.476479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.476490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.476829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.476840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.477133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.477144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.477335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.477349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.477625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.477636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.477946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.477957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 
00:29:48.139 [2024-06-10 12:33:53.478181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.478193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.478516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.478528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.478850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.478861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.479217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.479228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.479562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.479572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.479901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.479912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.480235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.480245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.480567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.480577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.480915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.480926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 00:29:48.139 [2024-06-10 12:33:53.481264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.139 [2024-06-10 12:33:53.481276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.139 qpair failed and we were unable to recover it. 
00:29:48.139 [2024-06-10 12:33:53.481616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.481627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.481938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.481949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.482182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.482192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.482493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.482504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.482792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.482803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.483130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.483140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.483460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.483471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.483764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.483774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.484091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.484101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.484458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.484470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 
00:29:48.140 [2024-06-10 12:33:53.484793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.484803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.485020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.485030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.485349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.485360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.485708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.485720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.486039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.486050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.486373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.486384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.486678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.486689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.487008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.487019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.487338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.487350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.487671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.487680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 
00:29:48.140 [2024-06-10 12:33:53.488003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.488014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.488356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.488368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.488689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.488699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.489019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.489029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.489343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.489354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.489664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.489674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.489994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.490004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.490324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.490334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.490674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.490685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.490875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.490885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 
00:29:48.140 [2024-06-10 12:33:53.491225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.491237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.491569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.491580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.491921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.491932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.492274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.492284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.492589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.492599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.492960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.492971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.493283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.493293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.493607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.140 [2024-06-10 12:33:53.493618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.140 qpair failed and we were unable to recover it. 00:29:48.140 [2024-06-10 12:33:53.493937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.493947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.494308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.494319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 
00:29:48.141 [2024-06-10 12:33:53.494628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.494639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.494835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.494845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.495045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.495056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.495227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.495239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.495595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.495605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.495944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.495955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.496268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.496280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.496589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.496599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.496911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.496923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.497214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.497225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 
00:29:48.141 [2024-06-10 12:33:53.497588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.497599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.497990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.498001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.498321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.498332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.498678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.498688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.499009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.499020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.499343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.499356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.499695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.499706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.500052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.500063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.500373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.500385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.500721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.500731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 
00:29:48.141 [2024-06-10 12:33:53.501053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.501063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.501402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.501413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.501732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.501742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.502064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.502075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.502407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.502418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.502713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.502723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.503057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.503068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.503390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.503400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.503728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.503738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.504079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.504090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 
00:29:48.141 [2024-06-10 12:33:53.504430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.504441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.504795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.504806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.505147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.505158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.505363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.505374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.505691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.505703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.506025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.506036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.506359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.141 [2024-06-10 12:33:53.506369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.141 qpair failed and we were unable to recover it. 00:29:48.141 [2024-06-10 12:33:53.506721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.506731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.507048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.507059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.507374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.507385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 
00:29:48.142 [2024-06-10 12:33:53.507721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.507731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.508071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.508082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.508434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.508446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.508759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.508769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.508960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.508970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.509271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.509282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.509599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.509611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.509938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.509948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.510260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.510271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.510611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.510621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 
00:29:48.142 [2024-06-10 12:33:53.510934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.510945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.511269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.511280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.511614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.511624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.511928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.511939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.512275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.512286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.512602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.512613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.512799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.512810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.513113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.513125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.513447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.513458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.513772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.513783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 
00:29:48.142 [2024-06-10 12:33:53.514117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.514128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.514467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.514478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.514664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.514675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.515008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.515018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.515348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.515359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.515699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.515711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.516046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.516056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.516373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.516384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.516702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.516713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 00:29:48.142 [2024-06-10 12:33:53.517000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.142 [2024-06-10 12:33:53.517011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.142 qpair failed and we were unable to recover it. 
00:29:48.142 [2024-06-10 12:33:53.517314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.517325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.517622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.517633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.517969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.517980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.518337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.518348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.518657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.518668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.518993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.519004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.519415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.519427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.519734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.519744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.520096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.520107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.520475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.520486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 
00:29:48.143 [2024-06-10 12:33:53.520826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.520836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.521209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.521220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.521551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.521561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.521845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.521856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.522181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.522192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.522511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.522522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.522848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.522859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.523184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.523200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.523512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.523522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.523865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.523875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 
00:29:48.143 [2024-06-10 12:33:53.524203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.524214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.524565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.524575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.524761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.524772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.525087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.525098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.525441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.525452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.525774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.525785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.526126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.526136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.526473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.526484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.526801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.526811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.527131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.527141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 
00:29:48.143 [2024-06-10 12:33:53.527463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.527473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.527818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.527829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.528155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.528166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.528418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.528429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.528756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.528766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.529105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.529116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.529436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.529447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.529771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.529783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.530095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.143 [2024-06-10 12:33:53.530106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.143 qpair failed and we were unable to recover it. 00:29:48.143 [2024-06-10 12:33:53.530415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.530425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 
00:29:48.144 [2024-06-10 12:33:53.530699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.530712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.531027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.531037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.531352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.531364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.531716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.531727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.532079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.532090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.532318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.532328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.532627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.532637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.532959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.532970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.533389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.533400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.533727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.533738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 
00:29:48.144 [2024-06-10 12:33:53.534041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.534052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.534396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.534407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.534728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.534738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.535059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.535069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.535413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.535425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.535775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.535785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.536104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.536115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.536319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.536330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.536642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.536652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.536993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.537004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 
00:29:48.144 [2024-06-10 12:33:53.537319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.537330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.537648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.537658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.537976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.537986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.538174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.538185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.538528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.538539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.538862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.538873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.539228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.539240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.539566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.539579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.539763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.539773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.540069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.540080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 
00:29:48.144 [2024-06-10 12:33:53.540418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.540429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.144 [2024-06-10 12:33:53.540616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.144 [2024-06-10 12:33:53.540627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.144 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.540849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.540860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.541179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.541189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.541514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.541525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.541870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.541881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.542207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.542219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.542561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.542571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.542892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.542902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.543254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.543265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 
00:29:48.145 [2024-06-10 12:33:53.543548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.543559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.543883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.543893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.544220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.544231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.544565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.544576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.544898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.544908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.545270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.545281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.545672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.545682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.546071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.546083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.546384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.546395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.546727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.546739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 
00:29:48.145 [2024-06-10 12:33:53.547053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.547064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.547401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.547411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.547747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.547758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.547948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.547959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.548245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.548258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.548595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.548606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.548917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.548928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.549251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.549262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.549476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.549486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.549767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.549778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 
00:29:48.145 [2024-06-10 12:33:53.550098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.550109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.550492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.550503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.550821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.550832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.551118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.551128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.551451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.551462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.551784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.551795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.552025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.552035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.552347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.552357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.552701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.552712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 00:29:48.145 [2024-06-10 12:33:53.552898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.145 [2024-06-10 12:33:53.552909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.145 qpair failed and we were unable to recover it. 
00:29:48.146 [2024-06-10 12:33:53.553242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.553252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.553592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.553604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.553926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.553937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.554262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.554274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.554614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.554624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.554914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.554925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.555247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.555258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.555581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.555592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.555905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.555917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.556105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.556116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 
00:29:48.146 [2024-06-10 12:33:53.556444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.556454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.556771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.556782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.557105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.557115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.557435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.557446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.557634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.557645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.557965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.557977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.558348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.558359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.558698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.558709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.558902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.558914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.559208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.559219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 
00:29:48.146 [2024-06-10 12:33:53.559535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.559546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.559890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.559900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.560220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.560232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.560572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.560582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.560904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.560914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.561277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.561288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.561587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.561598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.561935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.561946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.562274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.562285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.562583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.562594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 
00:29:48.146 [2024-06-10 12:33:53.562918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.562928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.563245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.563257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.563574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.563584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.563937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.563948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.564267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.564279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.564601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.564611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.564953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.564963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.565351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.565363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.146 qpair failed and we were unable to recover it. 00:29:48.146 [2024-06-10 12:33:53.565680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.146 [2024-06-10 12:33:53.565691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.147 qpair failed and we were unable to recover it. 00:29:48.147 [2024-06-10 12:33:53.566056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.147 [2024-06-10 12:33:53.566067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.147 qpair failed and we were unable to recover it. 
00:29:48.147 [2024-06-10 12:33:53.566375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.566386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.566699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.566709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.567032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.567043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.567211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.567223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.567541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.567552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.567893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.567904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.568226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.568237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.568572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.568583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.568875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.568885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.569081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.569091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.569447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.569458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.569790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.569801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.570121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.570134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.570446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.570457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.570778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.570789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.571123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.571134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.571481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.571491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.571825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.571836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.572053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.572064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.572304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.572314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.572633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.572643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.572986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.572997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.573312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.573323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.573657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.573668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.573993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.574004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.574346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.574357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.574665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.574675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.574860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.574872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.575208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.575219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.575544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.575555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.575866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.575878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.576217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.576227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.576501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.576511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.576842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.576852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.577169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.577180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.577487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.577506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.147 [2024-06-10 12:33:53.577836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.147 [2024-06-10 12:33:53.577846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.147 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.578197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.578210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.578531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.578541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.578858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.578871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.579192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.579207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.579546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.579556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.579877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.579888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.580206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.580216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.580579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.580590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.580932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.580943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.581267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.581279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.581614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.581625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.581944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.581954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.582354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.582365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.582674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.582684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.582998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.583008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.583330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.583342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.583644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.583655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.583980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.583991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.584316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.584328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.584643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.584654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.584984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.584996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.585317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.585327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.585649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.585660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.585885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.585895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.586230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.586241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.586541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.586552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.586867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.586877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.587202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.587212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.587426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.587436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.587766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.587776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.588171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.588182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.588501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.588513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.588856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.588866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.589210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.589221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.589572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.589582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.589811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.589821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.590147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.590157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.590478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.590488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.148 qpair failed and we were unable to recover it.
00:29:48.148 [2024-06-10 12:33:53.590808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.148 [2024-06-10 12:33:53.590819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.591139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.591150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.591342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.591354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.591695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.591706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.592026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.592037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.592351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.592363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.592659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.592670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.592996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.593007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.593321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.593333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.593677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.593688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.594027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.594038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.594357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.594367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.594697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.594707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.595036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.595046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.595239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.595249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.595567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.595577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.595897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.595907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.596218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.596228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.596560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.596570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.596913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.596923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.597318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.597329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.597627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.597639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.597983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.597993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.598314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.598326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.598649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.598660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.598985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.598995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.599327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.599338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.599671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.599681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.600011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.600022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.600343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.600353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.600701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.600711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.601031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.149 [2024-06-10 12:33:53.601042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.149 qpair failed and we were unable to recover it.
00:29:48.149 [2024-06-10 12:33:53.601404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.601417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.601737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.601748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.602098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.602108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.602435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.602447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.602765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.602777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.603101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.603111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.603432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.603443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.603763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.603774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.604090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.604100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.604447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.604458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.604797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.604807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.605138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.605148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.605508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.605518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.605840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.605851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.606203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.606214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.606521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.606532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.606725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.606735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.607035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.607046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.607356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.607366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.607693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.607703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.608024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.608035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.608356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.608367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.608707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.608718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.609037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.609049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.609395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.609406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.609721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.609732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.610077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.610087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.610305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.610318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.610651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.610661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.610982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.610993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.611323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.611334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.611731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.611741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.612034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.612045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.612362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.612373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.612718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.612729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.613048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.613058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.613372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.613383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.613735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.613745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.150 [2024-06-10 12:33:53.614047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.150 [2024-06-10 12:33:53.614059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.150 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.614376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.614386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.614705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.614715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.615062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.615072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.615425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.615436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.615753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.615764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.616084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.616094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.616415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.616425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.616776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.616786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.617122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.617132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.617468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.617479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.617806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.617817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.618153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.618163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.618487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.618497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.618819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.618831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.619155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.619166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.619475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.619488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.619807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.619818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.620102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.620112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.620455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.620466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.620808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.620819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.621143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.621154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.621477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.621488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.621838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.621849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.622231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.151 [2024-06-10 12:33:53.622241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.151 qpair failed and we were unable to recover it.
00:29:48.151 [2024-06-10 12:33:53.622553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.622563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.622875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.622886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.623284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.623294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.623606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.623618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.623845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.623855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.624175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.624185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.624513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.624524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.624873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.624883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.625206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.625217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.625461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.625472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 
00:29:48.151 [2024-06-10 12:33:53.625805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.625815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.626030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.626040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.626380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.626391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.626713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.626723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.151 qpair failed and we were unable to recover it. 00:29:48.151 [2024-06-10 12:33:53.627038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.151 [2024-06-10 12:33:53.627049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.627298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.627308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.627640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.627651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.627963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.627974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.628303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.628313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.628665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.628676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 
00:29:48.152 [2024-06-10 12:33:53.628993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.629004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.629324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.629335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.629655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.629666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.630008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.630018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.630277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.630288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.630612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.630623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.630945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.630955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.631303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.631314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.631648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.631658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.631980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.631991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 
00:29:48.152 [2024-06-10 12:33:53.632183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.632197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.632524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.632535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.632855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.632865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.633186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.633203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.633514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.633524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.633863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.633874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.634223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.634234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.634568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.634578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.634922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.634932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.635285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.635296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 
00:29:48.152 [2024-06-10 12:33:53.635689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.635699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.636011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.636022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.636323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.636333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.636686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.636696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.637102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.637113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.637430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.637441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.637775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.637786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.638128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.638138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.638409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.638419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.152 [2024-06-10 12:33:53.638782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.638792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 
00:29:48.152 [2024-06-10 12:33:53.639112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.152 [2024-06-10 12:33:53.639123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.152 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.639315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.639327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.639554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.639564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.639895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.639907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.640227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.640239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.640575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.640586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.640907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.640917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.641240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.641254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.641588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.641599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.641940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.641953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 
00:29:48.153 [2024-06-10 12:33:53.642238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.642248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.642566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.642577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.642769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.642780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.643075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.643086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.643429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.643440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.643666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.643676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.643995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.644005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.644362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.644373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.644705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.644717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.645035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.645045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 
00:29:48.153 [2024-06-10 12:33:53.645366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.645377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.645724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.645735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.646053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.646064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.646373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.646383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.646719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.646729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.647065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.647075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.647430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.647441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.647764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.647775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.648098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.648110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.648438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.648449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 
00:29:48.153 [2024-06-10 12:33:53.648775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.648785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.649106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.649118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.153 [2024-06-10 12:33:53.649450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.153 [2024-06-10 12:33:53.649461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.153 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.649808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.649818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.650137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.650147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.650470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.650481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.650826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.650838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.651182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.651192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.651536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.651547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.651947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.651958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 
00:29:48.154 [2024-06-10 12:33:53.652275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.652285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.652611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.652623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.652946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.652956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.653152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.653162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.653489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.653499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.653805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.653817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.654128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.654138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.654318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.654329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.654637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.654648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.655002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.655013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 
00:29:48.154 [2024-06-10 12:33:53.655354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.655365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.655685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.655696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.656028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.656038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.656372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.656383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.656708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.656719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.657044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.657054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.657394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.657405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.657719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.657730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.658063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.658073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.658374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.658385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 
00:29:48.154 [2024-06-10 12:33:53.658759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.658769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.659079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.659090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.659432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.659443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.659764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.659775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.660144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.660154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.660474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.660486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.660811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.660822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.661149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.661160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.661516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.661526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.154 [2024-06-10 12:33:53.661869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.661880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 
00:29:48.154 [2024-06-10 12:33:53.662221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.154 [2024-06-10 12:33:53.662232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.154 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.662551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.662561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.662885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.662895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.663234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.663245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.663568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.663579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.663898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.663908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.664230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.664241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.664549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.664559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.664877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.664888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.665208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.665219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 
00:29:48.155 [2024-06-10 12:33:53.665524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.665535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.665911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.665921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.666232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.666244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.666542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.666553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.666874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.666884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.667227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.667238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.667537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.667547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.667902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.667913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.668225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.668235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.668567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.668578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 
00:29:48.155 [2024-06-10 12:33:53.668804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.668814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.669021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.669031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.669374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.669385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.669732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.669743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.670108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.670118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.670439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.670450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.670770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.670781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.671094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.671105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.671446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.671457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 00:29:48.155 [2024-06-10 12:33:53.671776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.155 [2024-06-10 12:33:53.671787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.155 qpair failed and we were unable to recover it. 
00:29:48.155 [2024-06-10 12:33:53.672108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.155 [2024-06-10 12:33:53.672118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.155 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats back-to-back, with only the timestamps advancing, from 12:33:53.672 through 12:33:53.740 ...]
00:29:48.437 [2024-06-10 12:33:53.740601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.437 [2024-06-10 12:33:53.740612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.437 qpair failed and we were unable to recover it.
00:29:48.437 [2024-06-10 12:33:53.740934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.740944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.741165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.741175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.741399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.741410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.741765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.741775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.742095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.742106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.742416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.742427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.742753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.742764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.743123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.743133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.743452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.743464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.743783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.743793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 
00:29:48.437 [2024-06-10 12:33:53.744118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.744128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.744476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.744488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.437 qpair failed and we were unable to recover it. 00:29:48.437 [2024-06-10 12:33:53.744812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.437 [2024-06-10 12:33:53.744823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.745142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.745153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.745484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.745495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.745842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.745853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.746045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.746056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.746336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.746346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.746713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.746726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.747067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.747080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 
00:29:48.438 [2024-06-10 12:33:53.747419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.747432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.747754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.747765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.748086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.748097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.748315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.748326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.748555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.748565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.748877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.748887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.749267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.749278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.749623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.749633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.749951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.749961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 00:29:48.438 [2024-06-10 12:33:53.750151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.438 [2024-06-10 12:33:53.750163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.438 qpair failed and we were unable to recover it. 
00:29:48.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 853059 Killed "${NVMF_APP[@]}" "$@"
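That "Killed" line is the point of the test: target_disconnect.sh's tc2 case kills the running nvmf_tgt (pid 853059) out from under the connected host, which is what turned every subsequent connect() into the ECONNREFUSED stream above. Bash prints the Killed "${NVMF_APP[@]}" "$@" notice when it reaps a background job that died on SIGKILL. A hedged sketch of the disconnect step; the real line 36 of the script may differ in detail:

    # Sketch of the disconnect step; variable names follow the trace, the exact
    # commands in target_disconnect.sh may differ.
    kill -9 "$nvmfpid"              # 853059 in this run: the listener vanishes instantly
    wait "$nvmfpid" 2>/dev/null     # reap it; bash reports the job as "Killed"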
00:29:48.438 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:48.438 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:48.438 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:48.438 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:48.438 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
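These xtrace lines show the recovery half of the test: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0 to bring up a replacement target (core mask 0xF0 selects cores 4-7). A rough reconstruction of that flow, with the helper bodies simplified; only the names and arguments come from the trace:

    # Rough reconstruction from the trace; the real helpers live in
    # test/nvmf/host/target_disconnect.sh and nvmf/common.sh.
    disconnect_init() {
        local ip=$1                 # 10.0.0.2 here
        nvmfappstart -m 0xF0        # relaunch nvmf_tgt on cores 4-7
        # ...followed by RPC calls to recreate the TCP transport, subsystem,
        # and listener on $ip:4420 (elided; see the script for the real steps)
    }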
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=854043
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 854043
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 854043 ']'
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:48.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:48.439 12:33:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
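Here the harness launches the new target inside the cvl_0_0_ns_spdk network namespace (new pid 854043) and blocks in waitforlisten until the app answers on its RPC UNIX socket, /var/tmp/spdk.sock, giving up after max_retries=100 polls. Roughly, as a hedged sketch rather than the verbatim helper from autotest_common.sh:

    # Approximate shape of waitforlisten: succeed once the RPC server responds,
    # fail early if the target dies while starting. rpc.py and rpc_get_methods
    # are real SPDK pieces; the loop body here is illustrative.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited prematurely
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
                return 0                             # target is up and listening
            fi
            sleep 0.5
        done
        return 1
    }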
00:29:48.440 [... the connect() failed (errno = 111) / qpair failed triplet keeps repeating through 12:33:53.792 while the replacement target starts up ...]
00:29:48.442 [2024-06-10 12:33:53.792622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.792633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.792830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.792840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.793020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.793032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.793211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.793224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.793412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.793422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.793761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.793772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.794087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.794098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.794431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.794441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.794758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.794769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.795091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.795102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 
00:29:48.442 [2024-06-10 12:33:53.795509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.795520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.795840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.795851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.796173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.796184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.796493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.796504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.796835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.796847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.797155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.797166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.797354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.797369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.797561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.797572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.797780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.797791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.798120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.798131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 
00:29:48.442 [2024-06-10 12:33:53.798468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.798480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.798533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.798543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.798811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.798822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.799152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.799164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.799527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.799538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.799858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.442 [2024-06-10 12:33:53.799868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.442 qpair failed and we were unable to recover it. 00:29:48.442 [2024-06-10 12:33:53.800098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.800108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.800348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.800359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.800702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.800713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.801032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.801044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 
00:29:48.443 [2024-06-10 12:33:53.801367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.801378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.801674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.801685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.801987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.801997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.802239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.802250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.802463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.802473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.802830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.802840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.803198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.803209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.803531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.803541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.803890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.803901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.804223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.804233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 
00:29:48.443 [2024-06-10 12:33:53.804558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.804568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.804908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.804919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.805295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.805306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.805654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.805664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.806024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.806035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.806357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.806369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.806700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.806710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.807042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.807054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.807357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.807368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.807706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.807716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 
00:29:48.443 [2024-06-10 12:33:53.808032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.808043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.808358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.808368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.808678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.808688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.808871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.808881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.809270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.809281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.809637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.809648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.810012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.810022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.810345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.810358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.810561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.810571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.810914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.810926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 
00:29:48.443 [2024-06-10 12:33:53.811172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.811183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.811371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.811383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.811682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.443 [2024-06-10 12:33:53.811693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.443 qpair failed and we were unable to recover it. 00:29:48.443 [2024-06-10 12:33:53.812020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.812031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.812268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.812279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.812602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.812612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.812939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.812950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.813259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.813269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.813460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.813470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.813763] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:29:48.444 [2024-06-10 12:33:53.813809] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.444 [2024-06-10 12:33:53.813811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.813822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.814147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.814157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.814586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.814597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.814885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.814896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.815219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.815230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.815563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.815574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.815904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.815915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.816281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.816292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.816697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.816708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 
00:29:48.444 [2024-06-10 12:33:53.817036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.817047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.817364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.817376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.817731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.817743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.818062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.818073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.818329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.818340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.818723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.818735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.819093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.819104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.819468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.819480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.819813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.819824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.820187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.820215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 
00:29:48.444 [2024-06-10 12:33:53.820593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.820604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.820939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.820950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.821285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.821296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.821650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.821661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.821959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.821970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.822238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.822249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.822538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.822549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.822894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.822905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.823261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.823272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.823462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.823473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 
00:29:48.444 [2024-06-10 12:33:53.823763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.823774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.824100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.824111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.444 qpair failed and we were unable to recover it. 00:29:48.444 [2024-06-10 12:33:53.824443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.444 [2024-06-10 12:33:53.824454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.824789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.824801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.825004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.825015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.825311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.825323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.825635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.825646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.825965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.825976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.826291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.826302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.826647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.826657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 
00:29:48.445 [2024-06-10 12:33:53.827005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.827016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.827333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.827344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.827630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.827643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.827967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.827977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.828336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.828348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.828664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.828675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.828990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.829002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.829322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.829333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.829691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.829701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.830015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.830026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 
00:29:48.445 [2024-06-10 12:33:53.830345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.830356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.830549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.830561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.830740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.830752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.831074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.831085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.831282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.831293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.831600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.831610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.831965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.831976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.832317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.832328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.832650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.832661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.832982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.832993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 
00:29:48.445 [2024-06-10 12:33:53.833345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.833356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.833670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.833681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.834001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.834012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.834241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.834253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.834504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.834514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.445 [2024-06-10 12:33:53.834840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.445 [2024-06-10 12:33:53.834852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.445 qpair failed and we were unable to recover it. 00:29:48.446 [2024-06-10 12:33:53.835225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.446 [2024-06-10 12:33:53.835236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.446 qpair failed and we were unable to recover it. 00:29:48.446 [2024-06-10 12:33:53.835590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.446 [2024-06-10 12:33:53.835601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.446 qpair failed and we were unable to recover it. 00:29:48.446 [2024-06-10 12:33:53.835792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.446 [2024-06-10 12:33:53.835804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.446 qpair failed and we were unable to recover it. 00:29:48.446 [2024-06-10 12:33:53.835995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.446 [2024-06-10 12:33:53.836008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.446 qpair failed and we were unable to recover it. 
00:29:48.447 [2024-06-10 12:33:53.848690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.848701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.849017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.849028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 EAL: No free 2048 kB hugepages reported on node 1
00:29:48.447 [2024-06-10 12:33:53.849360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.849372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.849695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.849706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.850028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.850039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.850376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.850388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.850756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.850767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.851003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.851013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.851355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.851366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.447 [2024-06-10 12:33:53.851727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.447 [2024-06-10 12:33:53.851738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.447 qpair failed and we were unable to recover it.
00:29:48.451 [2024-06-10 12:33:53.896372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.896384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.896692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.896703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.897042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.897053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.897376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.897387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.897607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.897618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.897969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.897981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.898273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.898284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.898618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.898629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.898814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.898826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.899177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.899188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 
00:29:48.451 [2024-06-10 12:33:53.899551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.899563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.899878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.899889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.900212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.900223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.900557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.900568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.900884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.900896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.901220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.901231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.901544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.901556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.901893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.901904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.902220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.902231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.902577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.902588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 
00:29:48.451 [2024-06-10 12:33:53.902894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.902905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.903205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.903216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.903553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.903563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.903881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.903892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.904209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.904220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.904429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.451 [2024-06-10 12:33:53.904588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.904598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.904918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.904929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.451 [2024-06-10 12:33:53.905255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.451 [2024-06-10 12:33:53.905266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.451 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.905618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.905629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.905942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.905954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 
00:29:48.452 [2024-06-10 12:33:53.906274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.906285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.906623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.906634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.906956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.906966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.907025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.907035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.907324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.907334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.907522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.907533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.907733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.907745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.908027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.908038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.908390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.908402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.908603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.908613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 
00:29:48.452 [2024-06-10 12:33:53.908919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.908930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.909143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.909153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.909474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.909486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.909798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.909810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.910015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.910026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.910302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.910314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.910662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.910673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.911036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.911048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.911385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.911396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.911720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.911731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 
00:29:48.452 [2024-06-10 12:33:53.912090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.912101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.912429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.912440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.912763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.912774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.913146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.913157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.913505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.913516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.913857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.913868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.914086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.914097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.914454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.914465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.914756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.914766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.915081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.915093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 
00:29:48.452 [2024-06-10 12:33:53.915378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.915389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.915604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.915614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.915925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.915937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.916255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.916266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.916626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.916637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.452 [2024-06-10 12:33:53.917024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.452 [2024-06-10 12:33:53.917034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.452 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.917351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.917363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.917682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.917693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.918018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.918029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.918354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.918365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 
00:29:48.453 [2024-06-10 12:33:53.918719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.918729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.918788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.918798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.918995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.919005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.919326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.919337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.919617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.919627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.919910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.919921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.920116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.920127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.920418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.920429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.920791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.920802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.921149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.921160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 
00:29:48.453 [2024-06-10 12:33:53.921351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.921363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.921752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.921763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.922063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.922075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.922399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.922410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.922738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.922748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.923048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.923058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.923377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.923388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.923678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.923688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.924049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.924060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.924236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.924247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 
00:29:48.453 [2024-06-10 12:33:53.924558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.924569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.924884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.924895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.925213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.925224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.925548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.925559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.925888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.925899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.926245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.926256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.926561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.926572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.926891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.926901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.927207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.927219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.927551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.927562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 
00:29:48.453 [2024-06-10 12:33:53.927743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.927754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.927958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.927969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.928292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.928303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.453 [2024-06-10 12:33:53.928659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.453 [2024-06-10 12:33:53.928671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.453 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.928983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.928994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.929322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.929333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.929556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.929566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.929753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.929765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.930059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.930069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.930385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.930397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 
00:29:48.454 [2024-06-10 12:33:53.930710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.930721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.931054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.931065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.931394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.931405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.931720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.931731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.932046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.932057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.932402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.932423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.932752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.932763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.933089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.933100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.933429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.933440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.933775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.933787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 
00:29:48.454 [2024-06-10 12:33:53.934142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.934153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.934406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.934417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.934730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.934740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.935066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.935078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.935453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.935463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.935790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.935802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.936133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.936144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.936456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.936468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.936766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.936777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.936986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.936996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 
00:29:48.454 [2024-06-10 12:33:53.937320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.937331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.937686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.937697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.937888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.937900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.938234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.938245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.938606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.938620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.938919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.938930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.939301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.939312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.939509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.939520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.939825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.939835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 00:29:48.454 [2024-06-10 12:33:53.940172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.454 [2024-06-10 12:33:53.940182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.454 qpair failed and we were unable to recover it. 
00:29:48.454 [2024-06-10 12:33:53.940404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.454 [2024-06-10 12:33:53.940415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.454 qpair failed and we were unable to recover it.
[... last three messages repeated continuously from 12:33:53.940 through 12:33:53.967, only the timestamps changing ...]
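For reference: errno 111 is Linux's ECONNREFUSED, i.e. the TCP connection to 10.0.0.2:4420 is being actively refused, typically because the nvmf target has not opened its listener on that port yet. A minimal, SPDK-independent check of the errno mapping (plain C; nothing here comes from the SPDK sources):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is errno 111; strerror() spells it out. */
        printf("errno %d = %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }

On a Linux box this prints "errno 111 = Connection refused", matching the posix_sock_create errors above.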
00:29:48.457 [2024-06-10 12:33:53.967942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.457 [2024-06-10 12:33:53.967953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.457 qpair failed and we were unable to recover it.
[... last three messages repeated through 12:33:53.969 ...]
00:29:48.457 [2024-06-10 12:33:53.969235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:48.457 [2024-06-10 12:33:53.969262] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:48.457 [2024-06-10 12:33:53.969270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:48.457 [2024-06-10 12:33:53.969276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:48.457 [2024-06-10 12:33:53.969282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:48.457 [2024-06-10 12:33:53.969355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.457 [2024-06-10 12:33:53.969366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.457 qpair failed and we were unable to recover it.
[... last three messages repeated through 12:33:53.969436 ...]
00:29:48.457 [2024-06-10 12:33:53.969422] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:29:48.457 [2024-06-10 12:33:53.969557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:29:48.457 [2024-06-10 12:33:53.969723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:29:48.457 [2024-06-10 12:33:53.969725] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:29:48.457 [2024-06-10 12:33:53.969778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.457 [2024-06-10 12:33:53.969788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.457 qpair failed and we were unable to recover it.
[... last three messages repeated from 12:33:53.970 through 12:33:53.972 ...]
00:29:48.457 [2024-06-10 12:33:53.972666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.458 [2024-06-10 12:33:53.972676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.458 qpair failed and we were unable to recover it.
[... last three messages repeated continuously from 12:33:53.972 through 12:33:54.001, only the timestamps changing ...]
00:29:48.460 [2024-06-10 12:33:54.001566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.001576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.001905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.001916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.002268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.002281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.002594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.002605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.002940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.002951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.003277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.003290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.003530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.003541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.003862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.003873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.004078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.004089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 00:29:48.460 [2024-06-10 12:33:54.004426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.460 [2024-06-10 12:33:54.004437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.460 qpair failed and we were unable to recover it. 
00:29:48.460 [2024-06-10 12:33:54.004798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.004809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.004997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.005007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.005206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.005216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.005527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.005537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.005853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.005864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.006212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.006224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.006587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.006598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.006963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.006974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.007170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.007180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.007367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.007380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 
00:29:48.461 [2024-06-10 12:33:54.007693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.007704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.008036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.008047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.008377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.008390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.008738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.008749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.009048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.009060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.009379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.009391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.009585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.009595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.009848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.009858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.010182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.010193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.010542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.010556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 
00:29:48.461 [2024-06-10 12:33:54.010741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.010751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.011068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.011078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.011421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.011431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.011631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.011641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.011992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.012002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.012202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.012213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.012378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.012389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.012683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.012693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.013074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.013084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.013482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.013492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 
00:29:48.461 [2024-06-10 12:33:54.013684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.013694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.013901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.013911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.014171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.014182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.014368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.014378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.014727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.014737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.015067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.015078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.015436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.015447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.015809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.461 [2024-06-10 12:33:54.015820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.461 qpair failed and we were unable to recover it. 00:29:48.461 [2024-06-10 12:33:54.016015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.016027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.016233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.016244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 
00:29:48.462 [2024-06-10 12:33:54.016540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.016550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.016888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.016898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.017087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.017097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.017394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.017404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.017735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.017745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.017950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.017960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.018252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.018265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.018590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.018601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.018949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.018959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.019319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.019330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 
00:29:48.462 [2024-06-10 12:33:54.019679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.019689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.020008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.020018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.020371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.020383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.020619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.020629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.020951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.020961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.021288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.021299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.021488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.021499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.021798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.021809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.022132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.022143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.022452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.022462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 
00:29:48.462 [2024-06-10 12:33:54.022788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.022800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.023152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.023163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.023510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.023521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.023846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.023857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.024027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.024038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.024256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.024267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.462 [2024-06-10 12:33:54.024557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.462 [2024-06-10 12:33:54.024567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.462 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.024943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.024954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.025236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.025259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.025603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.025613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 
00:29:48.737 [2024-06-10 12:33:54.025668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.025677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.025971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.025981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.026190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.026206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.026526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.026538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.026728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.026739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.027069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.027080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.027355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.027366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.027609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.027620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.027924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.027936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.028202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.028214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 
00:29:48.737 [2024-06-10 12:33:54.028548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.028559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.028883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.028894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.737 [2024-06-10 12:33:54.029084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.737 [2024-06-10 12:33:54.029096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.737 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.029268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.029280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.029440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.029451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.029786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.029797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.030147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.030158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.030485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.030497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.030845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.030857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.031206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.031218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 
00:29:48.738 [2024-06-10 12:33:54.031556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.031567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.031889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.031900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.032225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.032236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.032448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.032460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.032816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.032827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.033149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.033160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.033480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.033492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.033817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.033829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.034063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.034073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.034411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.034422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 
00:29:48.738 [2024-06-10 12:33:54.034617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.034628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.034979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.034991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.035321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.035333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.035651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.035662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.035986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.035997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.036345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.036356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.036542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.036553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.036861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.036872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.037210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.037222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.037521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.037533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 
00:29:48.738 [2024-06-10 12:33:54.037892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.037903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.038244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.038255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.038446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.038458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.038732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.038744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.038936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.038950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.039330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.039341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.039574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.039586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.039873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.039884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.040239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.040250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 00:29:48.738 [2024-06-10 12:33:54.040519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.738 [2024-06-10 12:33:54.040530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.738 qpair failed and we were unable to recover it. 
00:29:48.738 [2024-06-10 12:33:54.040856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.040868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.041061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.041071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.041400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.041412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.041607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.041618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.041791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.041803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.042146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.042157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.042513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.042524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.042790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.042801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.042980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.042991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 00:29:48.739 [2024-06-10 12:33:54.043336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.739 [2024-06-10 12:33:54.043348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.739 qpair failed and we were unable to recover it. 
00:29:48.744 [2024-06-10 12:33:54.102426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.102437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.102770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.102781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.103098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.103108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.103428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.103439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.103778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.103788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.103839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.103847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.104012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.104023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.104356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.104366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.104721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.104732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.104921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.104932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 
00:29:48.744 [2024-06-10 12:33:54.105138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.105149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.105465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.105478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.105838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.105848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.106168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.106179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.106434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.106444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.106810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.106821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.107133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.107144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.107334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.107346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.107681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.107692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.108057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.108069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 
00:29:48.744 [2024-06-10 12:33:54.108422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.744 [2024-06-10 12:33:54.108433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.744 qpair failed and we were unable to recover it. 00:29:48.744 [2024-06-10 12:33:54.108761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.108773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.109089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.109099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.109445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.109456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.109803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.109814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.110155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.110165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.110495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.110506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.110832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.110843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.111036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.111046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.111382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.111393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 
00:29:48.745 [2024-06-10 12:33:54.111733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.111743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.111938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.111948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.112254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.112265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.112442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.112452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.112746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.112756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.113141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.113151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.113385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.113395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.113787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.113798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.114122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.114135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.114463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.114474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 
00:29:48.745 [2024-06-10 12:33:54.114670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.114680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.114984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.114996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.115342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.115352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.115544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.115554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.115744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.115755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.116091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.116101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.116427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.116440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.116764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.116775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.117077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.117089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.117426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.117437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 
00:29:48.745 [2024-06-10 12:33:54.117759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.117769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.117955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.117966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.118142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.118153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.118448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.118459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.118777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.118789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.119115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.119125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.119456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.119468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.119797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.119807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.120129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.745 [2024-06-10 12:33:54.120140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.745 qpair failed and we were unable to recover it. 00:29:48.745 [2024-06-10 12:33:54.120450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.120461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 
00:29:48.746 [2024-06-10 12:33:54.120808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.120819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.121170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.121182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.121510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.121520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.121848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.121860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.122047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.122058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.122379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.122390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.122733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.122744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.123056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.123068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.123253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.123264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.123556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.123567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 
00:29:48.746 [2024-06-10 12:33:54.123757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.123766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.124076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.124087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.124279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.124289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.124508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.124519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.124834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.124844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.125191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.125209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.125553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.125564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.125888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.125899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.126232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.126244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.126589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.126600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 
00:29:48.746 [2024-06-10 12:33:54.126950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.126961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.127289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.127300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.127641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.127651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.127972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.127983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.128301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.128312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.128641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.128652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.128978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.128988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.129318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.129329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.129537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.129547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.129857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.129868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 
00:29:48.746 [2024-06-10 12:33:54.130204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.130214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.130527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.130538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.746 [2024-06-10 12:33:54.130863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.746 [2024-06-10 12:33:54.130873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.746 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.131232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.131243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.131575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.131587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.131770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.131780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.132161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.132172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.132385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.132395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.132609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.132620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.132965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.132976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 
00:29:48.747 [2024-06-10 12:33:54.133325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.133336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.133706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.133718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.134047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.134058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.134410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.134421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.134769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.134779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.135072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.135082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.135420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.135433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.135787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.135798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.135991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.136001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.136333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.136344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 
00:29:48.747 [2024-06-10 12:33:54.136673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.136684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.137031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.137042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.137392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.137403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.137683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.137695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.138096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.138108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.138297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.138308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.138673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.138684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.138885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.138896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.139092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.139103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.139427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.139439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 
00:29:48.747 [2024-06-10 12:33:54.139753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.139765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.140089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.140101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.140422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.140433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.140775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.140786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.140977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.140988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.141329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.141339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.141664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.141675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.142035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.142046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.142276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.142287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.142538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.142548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 
00:29:48.747 [2024-06-10 12:33:54.142924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.142934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.747 qpair failed and we were unable to recover it. 00:29:48.747 [2024-06-10 12:33:54.143121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.747 [2024-06-10 12:33:54.143131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.143464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.143474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.143803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.143817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.144146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.144156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.144475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.144486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.144838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.144848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.145147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.145158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.145497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.145508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 00:29:48.748 [2024-06-10 12:33:54.145855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-06-10 12:33:54.145865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.748 qpair failed and we were unable to recover it. 
00:29:48.748 [2024-06-10 12:33:54.146180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.748 [2024-06-10 12:33:54.146192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.748 qpair failed and we were unable to recover it.
00:29:48.748 [2024-06-10 12:33:54.146497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.748 [2024-06-10 12:33:54.146507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.748 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, followed by "qpair failed and we were unable to recover it.", repeats for roughly 200 further connection attempts between 12:33:54.146 and 12:33:54.208, all with errno = 111 against tqpair=0x141d8c0, addr=10.0.0.2, port=4420 ...]
00:29:48.753 [2024-06-10 12:33:54.208819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.753 [2024-06-10 12:33:54.208829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.753 qpair failed and we were unable to recover it.
00:29:48.753 [2024-06-10 12:33:54.209165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.209177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.209577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.209588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.209917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.209928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.210275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.210286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.210519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.210529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.210849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.753 [2024-06-10 12:33:54.210859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.753 qpair failed and we were unable to recover it. 00:29:48.753 [2024-06-10 12:33:54.211058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.211068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.211264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.211275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.211604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.211614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.211981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.211991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 
00:29:48.754 [2024-06-10 12:33:54.212315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.212326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.212674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.212684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.212875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.212885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.213054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.213065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.213378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.213388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.213600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.213609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.213931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.213941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.214125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.214135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.214431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.214442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.214795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.214806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 
00:29:48.754 [2024-06-10 12:33:54.214996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.215007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.215306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.215317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.215642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.215652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.215977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.215988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.216171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.216181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.216534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.216545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.216725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.216736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.217087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.217100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.217441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.217451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.217645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.217655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 
00:29:48.754 [2024-06-10 12:33:54.217985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.217995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.218351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.218362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.218700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.218711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.219033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.219043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.219373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.219384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.219434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.219443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.219738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.219747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.219952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.219962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.220304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.220314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.220661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.220672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 
00:29:48.754 [2024-06-10 12:33:54.221020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.221031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.221356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.221367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.221696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.221707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.222031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.754 [2024-06-10 12:33:54.222042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.754 qpair failed and we were unable to recover it. 00:29:48.754 [2024-06-10 12:33:54.222387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.222397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.222590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.222600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.222778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.222789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.223130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.223141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.223460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.223471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.223799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.223811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 
00:29:48.755 [2024-06-10 12:33:54.224136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.224146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.224335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.224347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.224596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.224606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.224927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.224938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.225257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.225268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.225438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.225448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.225787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.225797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.226121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.226132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.226457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.226467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.226798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.226809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 
00:29:48.755 [2024-06-10 12:33:54.227108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.227120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.227447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.227458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.227784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.227795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.228120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.228132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.228445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.228456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.228647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.228657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.228985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.228997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.229182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.229198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.229417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.229428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.229757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.229768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 
00:29:48.755 [2024-06-10 12:33:54.230091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.230102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.230277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.230289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.230645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.230656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.230980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.230991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.231297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.231307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.231651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.231662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.232011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.232021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.232214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.232225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.232499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.232509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.232838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.232849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 
00:29:48.755 [2024-06-10 12:33:54.233199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.755 [2024-06-10 12:33:54.233210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.755 qpair failed and we were unable to recover it. 00:29:48.755 [2024-06-10 12:33:54.233331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.233340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.233643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.233653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.234000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.234011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.234202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.234214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.234559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.234569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.234894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.234904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.235244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.235255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.235571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.235582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.235905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.235915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 
00:29:48.756 [2024-06-10 12:33:54.236239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.236250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.236437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.236447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.236833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.236844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.237157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.237168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.237534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.237545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.237777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.237789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.237948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.237959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.238252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.238263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.238590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.238600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.238922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.238934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 
00:29:48.756 [2024-06-10 12:33:54.239128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.239137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.239345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.239356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.239700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.239711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.240040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.240052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.240398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.240410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.240732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.240742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.240973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.240983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.241299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.241309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.241500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.241511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.241704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.241715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 
00:29:48.756 [2024-06-10 12:33:54.242062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.242072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.242408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.242419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.242606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.242615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.242805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.242816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 12:33:54.243146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.756 [2024-06-10 12:33:54.243157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.243357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.243367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.243413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.243423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.243705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.243715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.243904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.243914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.244257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.244268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 
00:29:48.757 [2024-06-10 12:33:54.244609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.244619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.244669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.244679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.244975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.244988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.245332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.245343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.245689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.245700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.246026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.246038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.246236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.246247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.246437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.246447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.246725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.246735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 12:33:54.246903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.757 [2024-06-10 12:33:54.246915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.757 qpair failed and we were unable to recover it. 
00:29:48.757 [2024-06-10 12:33:54.247261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.757 [2024-06-10 12:33:54.247273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.757 qpair failed and we were unable to recover it.
[... the same three-line error group repeats for every reconnect attempt from 12:33:54.247 through 12:33:54.308, identical except for timestamps: each connect() to 10.0.0.2 port 4420 fails with errno = 111 and tqpair=0x141d8c0 never recovers ...]
00:29:48.763 [2024-06-10 12:33:54.308015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.763 [2024-06-10 12:33:54.308026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:48.763 qpair failed and we were unable to recover it.
00:29:48.763 [2024-06-10 12:33:54.308224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.308235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.308450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.308461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.308795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.308806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.309179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.309190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.309514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.309525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.309864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.309875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.310205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.310217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.310441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.310451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.310640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.310650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.310988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.310999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 
00:29:48.763 [2024-06-10 12:33:54.311313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.311325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.311674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.311685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.311886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.311897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.312235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.312246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.312575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.312587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.312767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.312777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.313123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.313134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.313369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.313381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.313699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.313709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.314050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.314061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 
00:29:48.763 [2024-06-10 12:33:54.314251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.314262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.314565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.314576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.314904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.314914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.315111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.315123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.315323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.315334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.315667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.315678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.316005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.316017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.316357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.316368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.316726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.316737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.316923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.316933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 
00:29:48.763 [2024-06-10 12:33:54.317238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.317249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.317477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.317487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.317688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.317699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.317883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.317894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.318190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.318207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.763 [2024-06-10 12:33:54.318523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.763 [2024-06-10 12:33:54.318534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.763 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.318888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.318899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.319223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.319235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.319528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.319539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.319735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.319745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 
00:29:48.764 [2024-06-10 12:33:54.319950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.319961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.320269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.320279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.320617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.320628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.320930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.320942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.321238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.321249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.321575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.321587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.321901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.321912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.322239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.322250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.322570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.322581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.322905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.322916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 
00:29:48.764 [2024-06-10 12:33:54.323236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.323247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.323417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.323427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.323729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.323740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.324153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.324163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.324498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.324509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.324858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.324868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.325214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.325225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.325551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.325561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.325853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.325863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.326203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.326215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 
00:29:48.764 [2024-06-10 12:33:54.326625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.326635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.326877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.326887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.327200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.327210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:48.764 [2024-06-10 12:33:54.327595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.764 [2024-06-10 12:33:54.327605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:48.764 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.327922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.327936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.328120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.328133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.328440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.328451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.328637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.328649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.328959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.328970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.329188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.329205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 
00:29:49.040 [2024-06-10 12:33:54.329533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.329543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.329867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.329878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.330227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.330238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.330529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.330541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.330712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.330722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.331067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.331079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.331425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.331436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.331621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.331631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.331918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.331929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.332236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.332246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 
00:29:49.040 [2024-06-10 12:33:54.332438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.332448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.332717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.332728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.333053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.333063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.333414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.333426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.333733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.333744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.334094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.334105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.334452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.334463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.040 qpair failed and we were unable to recover it. 00:29:49.040 [2024-06-10 12:33:54.334773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.040 [2024-06-10 12:33:54.334783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.334978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.334988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.335335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.335346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 
00:29:49.041 [2024-06-10 12:33:54.335669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.335680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.336028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.336039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.336229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.336239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.336583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.336594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.336916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.336929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.336978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.336986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.337271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.337282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.337511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.337521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.337572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.337580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.337868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.337879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 
00:29:49.041 [2024-06-10 12:33:54.338075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.338085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.338420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.338431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.338729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.338739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.339064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.339076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.339374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.339385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.339730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.339741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.339940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.339951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.340245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.340257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.340585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.340596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.340925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.340935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 
00:29:49.041 [2024-06-10 12:33:54.341286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.341298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.341483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.341494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.341835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.341847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.342164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.342174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.342345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.342355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.342688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.342699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.343023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.343035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.343362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.343372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.343589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.343599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.343782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.343793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 
00:29:49.041 [2024-06-10 12:33:54.343982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.343992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.344333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.344347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.344534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.344544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.344859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.344870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.345056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.041 [2024-06-10 12:33:54.345066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.041 qpair failed and we were unable to recover it. 00:29:49.041 [2024-06-10 12:33:54.345375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.345386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it. 00:29:49.042 [2024-06-10 12:33:54.345619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.345630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it. 00:29:49.042 [2024-06-10 12:33:54.345958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.345969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it. 00:29:49.042 [2024-06-10 12:33:54.346135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.346145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it. 00:29:49.042 [2024-06-10 12:33:54.346457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.346467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it. 
00:29:49.042 [2024-06-10 12:33:54.346814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.042 [2024-06-10 12:33:54.346824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.042 qpair failed and we were unable to recover it.
00:29:49.047 [... the same connect() failed (errno = 111) / qpair failed message pair repeated more than 200 times for tqpair=0x141d8c0, addr=10.0.0.2, port=4420, between 12:33:54.347 and 12:33:54.407 ...]
00:29:49.047 [2024-06-10 12:33:54.407761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.407770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it.
00:29:49.047 [2024-06-10 12:33:54.408080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.408090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.408413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.408423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.408740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.408751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.409099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.409110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.409408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.409419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.409748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.409759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.410115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.410126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.410483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.047 [2024-06-10 12:33:54.410494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.047 qpair failed and we were unable to recover it. 00:29:49.047 [2024-06-10 12:33:54.410836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.410847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.411171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.411181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 
00:29:49.048 [2024-06-10 12:33:54.411505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.411516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.411689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.411700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.411868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.411878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.412224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.412235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.412437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.412448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.412756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.412767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.413091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.413101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.413368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.413379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.413543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.413554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.413882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.413892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 
00:29:49.048 [2024-06-10 12:33:54.414221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.414232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.414595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.414607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.414933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.414944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.415292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.415303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.415635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.415647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.415704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.415714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.415941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.415951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.416320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.416331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.416687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.416699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.417027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.417039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 
00:29:49.048 [2024-06-10 12:33:54.417355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.417366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.417680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.417691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.418037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.418048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.418238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.418249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.418435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.418447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.418775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.418785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.419105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.419117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.419432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.419443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.419641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.419652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.419976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.419989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 
00:29:49.048 [2024-06-10 12:33:54.420122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.420132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.420405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.420417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.420716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.420726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.420927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.420937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.421234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.421245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.421581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.421593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.048 [2024-06-10 12:33:54.421915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.048 [2024-06-10 12:33:54.421925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.048 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.422114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.422127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.422334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.422345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.422676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.422687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 
00:29:49.049 [2024-06-10 12:33:54.423010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.423022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.423240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.423250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.423562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.423573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.423866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.423877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.424203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.424214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.424526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.424536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.424848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.424860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.425050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.425061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.425348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.425359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.425522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.425533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 
00:29:49.049 [2024-06-10 12:33:54.425883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.425894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.426229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.426239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.426579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.426591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.426914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.426925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.427131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.427142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.427490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.427501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.427830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.427843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.427895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.427903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.428205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.428216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.428521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.428531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 
00:29:49.049 [2024-06-10 12:33:54.428853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.428864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.428912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.428921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.429237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.429248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.429577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.429588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.429936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.429948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.430276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.430286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.430618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.430630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.049 [2024-06-10 12:33:54.431006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.049 [2024-06-10 12:33:54.431016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.049 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.431344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.431354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.431512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.431523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 
00:29:49.050 [2024-06-10 12:33:54.431842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.431852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.432260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.432271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.432712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.432722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.432905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.432915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.433091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.433102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.433421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.433431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.433759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.433769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.434070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.434080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.434451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.434461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.434775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.434786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 
00:29:49.050 [2024-06-10 12:33:54.434835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.434843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.435178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.435188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.435383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.435394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.435575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.435587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.435919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.435930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.436276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.436288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.436637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.436648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.436993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.437004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.437333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.437345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.437535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.437546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 
00:29:49.050 [2024-06-10 12:33:54.437866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.437877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.438277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.438288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.438616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.438627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.438946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.438957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.439384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.439397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.439709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.439721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.439947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.439957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.440362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.440373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.440503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.440512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.440823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.440833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 
00:29:49.050 [2024-06-10 12:33:54.441098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.441108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.441485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.441496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.441625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.441635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.441963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.441973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.442165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.442174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.442361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.050 [2024-06-10 12:33:54.442373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.050 qpair failed and we were unable to recover it. 00:29:49.050 [2024-06-10 12:33:54.442551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.442561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.442871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.442882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.443073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.443084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.443306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.443318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 
00:29:49.051 [2024-06-10 12:33:54.443608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.443619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.443904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.443915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.444280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.444291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.444583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.444593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.444895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.444905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.445094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.445104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.445302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.445313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.445606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.445616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.445788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.445798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 00:29:49.051 [2024-06-10 12:33:54.446126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.446137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it. 
00:29:49.051 [2024-06-10 12:33:54.446445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.051 [2024-06-10 12:33:54.446457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.051 qpair failed and we were unable to recover it.
00:29:49.056 [... the same three-line sequence — connect() to 10.0.0.2, port=4420 refused with errno = 111, sock connection error of tqpair=0x141d8c0, "qpair failed and we were unable to recover it." — repeats for every retry from 2024-06-10 12:33:54.446782 through 2024-06-10 12:33:54.509526 ...]
00:29:49.056 [2024-06-10 12:33:54.509852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.056 [2024-06-10 12:33:54.509862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.056 qpair failed and we were unable to recover it. 00:29:49.056 [2024-06-10 12:33:54.510153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.056 [2024-06-10 12:33:54.510165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.056 qpair failed and we were unable to recover it. 00:29:49.056 [2024-06-10 12:33:54.510483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.056 [2024-06-10 12:33:54.510494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.510783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.510795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.511142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.511154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.511339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.511350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.511651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.511661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.511961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.511972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.512314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.512324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.512526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.512536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 
00:29:49.057 [2024-06-10 12:33:54.512838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.512849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.513045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.513055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.513260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.513273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.513588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.513599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.513786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.513797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.513985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.513996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.514192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.514206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.514617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.514628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.514814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.514824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.515113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.515124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 
00:29:49.057 [2024-06-10 12:33:54.515316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.515327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.515628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.515638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.515988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.515999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.516314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.516324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.516644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.516655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.516984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.516995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.517183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.517199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.517252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.517261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.517535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.517546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.517871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.517882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 
00:29:49.057 [2024-06-10 12:33:54.518206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.518217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.518588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.518599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.518942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.518953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.519185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.519202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.519381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.519391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.519685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.519696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.520051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.520062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.520425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.520436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.520625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.520639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.057 [2024-06-10 12:33:54.520950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.520960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 
00:29:49.057 [2024-06-10 12:33:54.521150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.057 [2024-06-10 12:33:54.521160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.057 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.521477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.521488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.521822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.521833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.522041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.522052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.522102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.522112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.522393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.522404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.522772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.522783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.523081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.523094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.523387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.523398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.523573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.523585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 
00:29:49.058 [2024-06-10 12:33:54.523926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.523938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.524265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.524276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.524479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.524490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.524789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.524799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.525123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.525133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.525464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.525475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.525675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.525687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.525989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.526002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.526330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.526341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.526548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.526559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 
00:29:49.058 [2024-06-10 12:33:54.526902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.526912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.527269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.527281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.527609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.527619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.527958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.527968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.528290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.528301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.528488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.528502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.528808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.528820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.528865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.528875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.529189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.529204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.529533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.529544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 
00:29:49.058 [2024-06-10 12:33:54.529857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.529868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.530202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.530213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.530438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.530449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.530820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.058 [2024-06-10 12:33:54.530830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.058 qpair failed and we were unable to recover it. 00:29:49.058 [2024-06-10 12:33:54.531154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.531165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.531489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.531500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.531691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.531702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.532020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.532031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.532394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.532406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.532582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.532593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 
00:29:49.059 [2024-06-10 12:33:54.532788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.532799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.532972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.532982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.533303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.533314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.533643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.533653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.533845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.533856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.534202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.534214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.534530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.534541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.534871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.534882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.535066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.535077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.535419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.535430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 
00:29:49.059 [2024-06-10 12:33:54.535617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.535629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.535908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.535920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.536264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.536275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.536595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.536607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.536958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.536968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.537161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.537171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.537465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.537476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.537669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.537679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.537954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.537965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.538290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.538303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 
00:29:49.059 [2024-06-10 12:33:54.538690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.538700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.539058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.539069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.539425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.539437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.539760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.539771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.540096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.540107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.540310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.540321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.540604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.540614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.540939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.540949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.541276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.541287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.541644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.541655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 
00:29:49.059 [2024-06-10 12:33:54.541981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.541991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.542210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.542221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.059 [2024-06-10 12:33:54.542538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.059 [2024-06-10 12:33:54.542549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.059 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.542910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.542921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.543112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.543123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.543410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.543420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.543744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.543754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.543939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.543949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.544324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.544335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.544659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.544670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 
00:29:49.060 [2024-06-10 12:33:54.544857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.544869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.545067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.545079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.545327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.545338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.545651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.545662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.545874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.545884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.546167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.546177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.546513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.546525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.546840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.546851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.547187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.547203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.547548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.547559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 
00:29:49.060 [2024-06-10 12:33:54.547881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.547892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.548212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.548223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.548411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.548420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.548777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.548791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.549003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.549014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.549203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.549215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.549418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.549428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.549752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.549764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.549952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.549963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 00:29:49.060 [2024-06-10 12:33:54.550288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.060 [2024-06-10 12:33:54.550300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.060 qpair failed and we were unable to recover it. 
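Errno 111 on Linux is ECONNREFUSED: the initiator's TCP SYN to 10.0.0.2:4420 is being rejected because nothing is accepting on that address and port yet (4420 is the IANA-assigned NVMe/TCP port). A minimal standalone sketch in plain C that reproduces the same errno against a port with no listener; this is an illustration, not SPDK's posix_sock_create():

/* Sketch only, not SPDK code: connect() to an address/port with no
 * listener fails with ECONNREFUSED, which is errno 111 on Linux and
 * matches the posix_sock_create errors in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* any host with the port closed */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Against 127.0.0.1 with the port closed this prints "connect() failed, errno = 111 (Connection refused)"; against the log's 10.0.0.2 the kernel reports the same errno once the route exists but no target is listening.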
00:29:49.064 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:49.064 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:29:49.064 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:49.064 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:49.064 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.065 [... the connect()/qpair failure triple keeps repeating, interleaved with the shell trace above, from 12:33:54.586 through 12:33:54.608; duplicate entries omitted ...]
00:29:49.066 [2024-06-10 12:33:54.608796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.608808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.609122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.609133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.609438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.609449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.609749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.609761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.610060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.610071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.610351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.610362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.610698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.610709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.611054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.611064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.611377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.611388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.611584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.611595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 
00:29:49.066 [2024-06-10 12:33:54.611762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.611774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.612095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.612107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.612427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.612438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.612767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.612778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.613093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.613104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.613451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.613461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.613696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.613708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.614031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.614043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.614283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.614294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.614469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.614479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 
00:29:49.066 [2024-06-10 12:33:54.614656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.614666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.614974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.614984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.615175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.615185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.615420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.615431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.615755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.615766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.616087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.616099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.616278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.616289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.616593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.616603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.616949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.616961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.617368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.617379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 
00:29:49.066 [2024-06-10 12:33:54.617683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.617695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.617861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.617871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.618109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.618119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.618452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.618463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.618795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.618806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.619149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.619159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.619485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.619496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.619816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.066 [2024-06-10 12:33:54.619829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.066 qpair failed and we were unable to recover it. 00:29:49.066 [2024-06-10 12:33:54.620128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.620139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.620483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.620495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 
00:29:49.067 [2024-06-10 12:33:54.620819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.620830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.621158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.621169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.621493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.621505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.621693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.621702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.622014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.622025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.622337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.622348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.622666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.622677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.622868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.622881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.623076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.623087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.623384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.623395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 
00:29:49.067 [2024-06-10 12:33:54.623720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.623732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.624079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.624090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.624434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.624446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.624538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.624549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.624795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.624806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.625157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.625169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.625509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.625520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.625863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.625874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.626205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.626216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.626548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.626559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 
00:29:49.067 [2024-06-10 12:33:54.626872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.626884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.627201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.627213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.627530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.627540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.627877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.627888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.628115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.628128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.628338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.628349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.628681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.628693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.628892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.628903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.629105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.629117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 00:29:49.067 [2024-06-10 12:33:54.629432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.067 [2024-06-10 12:33:54.629443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.067 qpair failed and we were unable to recover it. 
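Every retry above fails the same way: errno 111 is ECONNREFUSED on Linux, meaning the initiator reaches the host but nothing is accepting on 10.0.0.2:4420 while the target side is still being set up, as the interleaved trace below shows. A one-line check of the errno mapping, assuming only a stock python3 on the test box (illustration only, not part of the test):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'   # prints: 111 Connection refused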
00:29:49.334 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:49.334 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:49.334 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:49.334 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the same connect()/qpair failure pair repeats between each of these trace lines, 12:33:54.629638 through 12:33:54.631681]
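Between the failed connects, the xtrace lines show the harness installing its cleanup trap and creating the backing device. A minimal standalone sketch of those two traced steps, assuming SPDK's test helpers are sourced (rpc_cmd, process_shm and nvmftestfini are helpers from the repository's test scripts, named here exactly as in the trace; 64 and 512 are the malloc bdev size in MB and block size in bytes):

  # cleanup: dump the app's shared memory, then tear the target down, on any exit
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0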
[further identical connect()/qpair failure pairs from 12:33:54.632016 through 12:33:54.646851 elided]
00:29:49.335 Malloc0
00:29:49.335 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.335 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[connect()/qpair failure pairs continue around each of these trace lines, 12:33:54.647205 through 12:33:54.649488]
00:29:49.335 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.335 [2024-06-10 12:33:54.649718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.335 [2024-06-10 12:33:54.649730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.335 qpair failed and we were unable to recover it. 00:29:49.335 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.335 [2024-06-10 12:33:54.649944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.335 [2024-06-10 12:33:54.649955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.335 qpair failed and we were unable to recover it. 00:29:49.335 [2024-06-10 12:33:54.650273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.335 [2024-06-10 12:33:54.650284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.335 qpair failed and we were unable to recover it. 00:29:49.335 [2024-06-10 12:33:54.650598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.335 [2024-06-10 12:33:54.650608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.335 qpair failed and we were unable to recover it. 00:29:49.335 [2024-06-10 12:33:54.650933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.335 [2024-06-10 12:33:54.650944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.335 qpair failed and we were unable to recover it. 00:29:49.336 [2024-06-10 12:33:54.651159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.336 [2024-06-10 12:33:54.651170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.336 qpair failed and we were unable to recover it. 00:29:49.336 [2024-06-10 12:33:54.651363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.336 [2024-06-10 12:33:54.651374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.336 qpair failed and we were unable to recover it. 00:29:49.336 [2024-06-10 12:33:54.651674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.336 [2024-06-10 12:33:54.651684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.336 qpair failed and we were unable to recover it. 00:29:49.336 [2024-06-10 12:33:54.651991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.336 [2024-06-10 12:33:54.652001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420 00:29:49.336 qpair failed and we were unable to recover it. 
00:29:49.336 [2024-06-10 12:33:54.652334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.652345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.652541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.652552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.652905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.652916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.653306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.653317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.653500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.653509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.653843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.653854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.654087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.654097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.654289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.654299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.654508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.654518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.654877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.654887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.655081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.655093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.655425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.655435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.655613] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:49.336 [2024-06-10 12:33:54.655759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.655769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.655952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.655963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.656279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.656290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.656650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.656661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.656989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.657001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.657329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.657340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.657523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.657533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.657729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.657740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.658048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.658058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.658379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.658390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.658741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.658751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.659080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.659092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.659439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.659450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.659779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.659790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.659995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.660005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.660331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.660342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.660508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.660517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.660855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.660866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.661074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.661083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.661421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.661432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.661755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.661767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.661905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.336 [2024-06-10 12:33:54.661914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.336 qpair failed and we were unable to recover it.
00:29:49.336 [2024-06-10 12:33:54.662233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.662244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.662431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.662442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.662662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.662673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.663013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.663025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.663398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.663409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.663732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.663742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.664067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.664077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.664431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.664442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.337 [2024-06-10 12:33:54.664637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.664648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.664958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.664968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:49.337 [2024-06-10 12:33:54.665160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.665171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:49.337 [2024-06-10 12:33:54.665362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.665374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.337 [2024-06-10 12:33:54.665569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.665579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.665895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.665906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.666312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.666325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.666660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.666670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.666876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.666886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.667220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.667231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.667579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.667590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.667851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.667861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.668223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.668235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.668558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.668569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.668903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.668914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.669239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.669250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.669554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.669565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.669909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.669919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.670198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.670210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.670535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.670546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.670729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.670740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.670794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.670803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.671123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.671133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.671540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.671551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.671878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.671889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.672220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.672230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.672546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.337 [2024-06-10 12:33:54.672555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.337 qpair failed and we were unable to recover it.
00:29:49.337 [2024-06-10 12:33:54.672880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.672890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.673212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.673223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.673417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.673428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.673599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.673610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.673935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.673945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.674172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.674183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.674381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.674392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.674722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.674734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.675094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.675104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.675278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.675288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.675577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.675588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.675719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.675729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.675946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.675956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.676262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.676273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.676486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.676496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.338 [2024-06-10 12:33:54.676816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.676827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.677017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.677026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:49.338 [2024-06-10 12:33:54.677334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.677346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:49.338 [2024-06-10 12:33:54.677531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.677544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.338 [2024-06-10 12:33:54.677877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.677888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.677945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.677955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.678280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.678292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.678634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.678645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.678837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.678847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.679044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.679054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.679385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.679396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.679447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.679455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.338 [2024-06-10 12:33:54.679770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.338 [2024-06-10 12:33:54.679780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.338 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.680015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.680025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.680345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.680357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.680693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.680703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.681029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.681039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.681369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.681381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.681542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.681553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.681829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.681839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.682162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.682172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.682456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.682467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.682802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.682813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.683135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.683146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.683368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.683378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.683583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.683593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.683946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.683956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.684275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.684286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.684635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.684646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.684977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.684988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.685180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.685191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.685564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.685576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.685916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.685927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.686251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.686261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.686620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.686631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.686818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.686829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.687155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.687167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.687498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.687508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.687841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.687852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.688035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.688045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.688423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.688434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.339 [2024-06-10 12:33:54.688761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.688772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:49.339 [2024-06-10 12:33:54.689114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.689125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:49.339 [2024-06-10 12:33:54.689450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.689462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.339 [2024-06-10 12:33:54.689788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.689799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.690179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.690189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.339 [2024-06-10 12:33:54.690383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.339 [2024-06-10 12:33:54.690393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.339 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.690446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.690456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.690803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.690813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.691182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.691192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.691415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.691425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.691746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.691757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.692082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.692093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.692408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.692418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.692743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.692753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.693108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.693118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.693450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.693463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.693652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.693663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.694017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.694028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.694356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.694367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.694669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.694680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.694984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.694994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.695187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.695201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.695615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.340 [2024-06-10 12:33:54.695626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d8c0 with addr=10.0.0.2, port=4420
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.695891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.340 [2024-06-10 12:33:54.706465] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.706547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.706568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.706576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.340 [2024-06-10 12:33:54.706583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.340 [2024-06-10 12:33:54.706602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.340 qpair failed and we were unable to recover it.
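The rpc_cmd traces interleaved above are the harness driving SPDK's scripts/rpc.py against the running nvmf_tgt; once the fourth call lands, the target prints the "NVMe/TCP Target Listening" notice. Reconstructed from those traces (a sketch, assuming rpc.py is the SPDK tree's scripts/rpc.py and the target is on the default RPC socket), the bring-up is:

  # Target bring-up exactly as traced above (sketch):
  rpc.py nvmf_create_transport -t tcp -o                  # prints "*** TCP Transport Init ***"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # Malloc0 bdev as namespace 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service

After the fourth command the initiator's ECONNREFUSED storm stops; the remaining failures below happen at the NVMe-oF CONNECT stage instead of at the TCP socket.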
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:49.340 12:33:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 853359
00:29:49.340 [2024-06-10 12:33:54.716456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.716524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.716541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.716548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.340 [2024-06-10 12:33:54.716555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.340 [2024-06-10 12:33:54.716571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.726439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.726498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.726515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.726522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.340 [2024-06-10 12:33:54.726528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.340 [2024-06-10 12:33:54.726542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.736405] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.736473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.736489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.736497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.736503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.736517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
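The wait 853359 above is the harness reaping a workload it backgrounded earlier while the connect errors keep streaming in; the PID is whatever $! captured at launch. A hedged sketch of that pattern (hypothetical command name, not the harness's actual helper):

  # Hypothetical reconstruction of the background-and-reap pattern behind "wait 853359":
  run_disconnect_workload &     # initiator-side I/O generator started earlier in the test
  io_pid=$!                     # 853359 in this run
  # ... target is torn down / reconfigured while the workload runs ...
  wait "$io_pid"                # exit status tells the test whether the I/O survived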
00:29:49.340 [2024-06-10 12:33:54.746435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.746523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.746540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.746547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.340 [2024-06-10 12:33:54.746554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.340 [2024-06-10 12:33:54.746568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.756455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.756510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.756531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.756538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.340 [2024-06-10 12:33:54.756544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.340 [2024-06-10 12:33:54.756559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.340 qpair failed and we were unable to recover it.
00:29:49.340 [2024-06-10 12:33:54.766356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.340 [2024-06-10 12:33:54.766414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.340 [2024-06-10 12:33:54.766430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.340 [2024-06-10 12:33:54.766437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.766444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.766458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.776456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.776549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.776567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.776577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.776583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.776599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.786398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.786467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.786484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.786491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.786498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.786512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.796541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.796599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.796615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.796622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.796628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.796645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.806554] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.806611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.806627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.806634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.806640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.806654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.816571] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.816667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.816683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.816690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.816696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.816710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.826647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.826719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.826735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.826743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.826752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.826767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.836677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.836737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.836753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.836761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.836767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.836781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.846704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.846761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.846781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.846788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.846794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.846807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.856582] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.856681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.856698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.856705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.856711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.856724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.866747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.866813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.866829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.866836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.866842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.866855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.876758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.876812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.876828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.876835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.876842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.876855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.886728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.886829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.886846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.886854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.886863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.886877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.896697] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.341 [2024-06-10 12:33:54.896756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.341 [2024-06-10 12:33:54.896772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.341 [2024-06-10 12:33:54.896779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.341 [2024-06-10 12:33:54.896785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.341 [2024-06-10 12:33:54.896799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.341 qpair failed and we were unable to recover it.
00:29:49.341 [2024-06-10 12:33:54.906864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.342 [2024-06-10 12:33:54.906923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.342 [2024-06-10 12:33:54.906940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.342 [2024-06-10 12:33:54.906947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.342 [2024-06-10 12:33:54.906953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.342 [2024-06-10 12:33:54.906966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.342 qpair failed and we were unable to recover it.
00:29:49.342 [2024-06-10 12:33:54.916893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.342 [2024-06-10 12:33:54.916955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.342 [2024-06-10 12:33:54.916970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.342 [2024-06-10 12:33:54.916978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.342 [2024-06-10 12:33:54.916984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.342 [2024-06-10 12:33:54.916997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.342 qpair failed and we were unable to recover it.
00:29:49.342 [2024-06-10 12:33:54.926934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.342 [2024-06-10 12:33:54.926993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.342 [2024-06-10 12:33:54.927009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.342 [2024-06-10 12:33:54.927016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.342 [2024-06-10 12:33:54.927023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.342 [2024-06-10 12:33:54.927037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.342 qpair failed and we were unable to recover it.
00:29:49.604 [2024-06-10 12:33:54.936928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.936988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.937004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.937011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.937017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.937031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.946964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.947069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.947085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.947092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.947098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.947111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.957002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.957055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.957071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.957078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.957084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.957098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.967030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.967088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.967103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.967110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.967116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.967130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.977135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.977221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.977237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.977244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.977258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.977272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.987140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.987205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.987221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.987228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.987235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.987249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:54.997168] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:54.997233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:54.997249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:54.997256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:54.997262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:54.997277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.007188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.007248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.007264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.007272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.007279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.007293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.017152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.017215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.017231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.017239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.017245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.017259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.027198] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.027262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.027278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.027285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.027292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.027305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.037215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.037273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.037288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.037296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.037302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.037315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.047303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.047357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.047373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.047380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.047386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.047400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.057242] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.057305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.605 [2024-06-10 12:33:55.057321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.605 [2024-06-10 12:33:55.057328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.605 [2024-06-10 12:33:55.057334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.605 [2024-06-10 12:33:55.057348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.605 qpair failed and we were unable to recover it.
00:29:49.605 [2024-06-10 12:33:55.067289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.605 [2024-06-10 12:33:55.067351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.067367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.067374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.067384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.067398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.077225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.077288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.077303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.077310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.077317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.077331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.087338] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.087397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.087414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.087421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.087428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.087442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.097319] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.097383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.097399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.097406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.097412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.097426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.107430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.107508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.107524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.107531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.107537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.107551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.117431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.117485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.117501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.117509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.117515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.117528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.127459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.127564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.127581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.127588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.127594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.127607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.137452] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.137534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.137550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.137557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.137564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.137577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.147481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.147546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.147561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.147568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.147575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.147588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.157560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.157651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.157667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.157677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.157684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.157697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.167560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.167618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.167634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.167641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.167648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.167661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.177602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.177659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.177674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.177681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.177688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.177701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.187602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.187665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.187680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.187688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.187694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.606 [2024-06-10 12:33:55.187707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.606 qpair failed and we were unable to recover it.
00:29:49.606 [2024-06-10 12:33:55.197640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.606 [2024-06-10 12:33:55.197698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.606 [2024-06-10 12:33:55.197713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.606 [2024-06-10 12:33:55.197721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.606 [2024-06-10 12:33:55.197727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.607 [2024-06-10 12:33:55.197740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.607 qpair failed and we were unable to recover it.
00:29:49.607 [2024-06-10 12:33:55.207676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.607 [2024-06-10 12:33:55.207733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.607 [2024-06-10 12:33:55.207749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.607 [2024-06-10 12:33:55.207756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.607 [2024-06-10 12:33:55.207762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.607 [2024-06-10 12:33:55.207774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.607 qpair failed and we were unable to recover it.
00:29:49.869 [2024-06-10 12:33:55.217699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.869 [2024-06-10 12:33:55.217759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.869 [2024-06-10 12:33:55.217775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.869 [2024-06-10 12:33:55.217781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.869 [2024-06-10 12:33:55.217788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.869 [2024-06-10 12:33:55.217801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.869 qpair failed and we were unable to recover it.
00:29:49.869 [2024-06-10 12:33:55.227742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.869 [2024-06-10 12:33:55.227801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.869 [2024-06-10 12:33:55.227817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.869 [2024-06-10 12:33:55.227824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.869 [2024-06-10 12:33:55.227830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.869 [2024-06-10 12:33:55.227844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.869 qpair failed and we were unable to recover it.
00:29:49.869 [2024-06-10 12:33:55.237749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.869 [2024-06-10 12:33:55.237834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.869 [2024-06-10 12:33:55.237850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.869 [2024-06-10 12:33:55.237857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.869 [2024-06-10 12:33:55.237864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.869 [2024-06-10 12:33:55.237877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.869 qpair failed and we were unable to recover it.
00:29:49.869 [2024-06-10 12:33:55.247787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.869 [2024-06-10 12:33:55.247844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.869 [2024-06-10 12:33:55.247860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.869 [2024-06-10 12:33:55.247870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.869 [2024-06-10 12:33:55.247877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.869 [2024-06-10 12:33:55.247890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.869 qpair failed and we were unable to recover it.
00:29:49.869 [2024-06-10 12:33:55.257811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.869 [2024-06-10 12:33:55.257869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.869 [2024-06-10 12:33:55.257884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.869 [2024-06-10 12:33:55.257891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.869 [2024-06-10 12:33:55.257897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.869 [2024-06-10 12:33:55.257911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.869 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.267839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.870 [2024-06-10 12:33:55.267906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.870 [2024-06-10 12:33:55.267924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.870 [2024-06-10 12:33:55.267931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.870 [2024-06-10 12:33:55.267937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.870 [2024-06-10 12:33:55.267951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.870 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.277868] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.870 [2024-06-10 12:33:55.277927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.870 [2024-06-10 12:33:55.277943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.870 [2024-06-10 12:33:55.277951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.870 [2024-06-10 12:33:55.277957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.870 [2024-06-10 12:33:55.277970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.870 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.287891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.870 [2024-06-10 12:33:55.287949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.870 [2024-06-10 12:33:55.287966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.870 [2024-06-10 12:33:55.287973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.870 [2024-06-10 12:33:55.287979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.870 [2024-06-10 12:33:55.287992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.870 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.297940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.870 [2024-06-10 12:33:55.298029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.870 [2024-06-10 12:33:55.298054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.870 [2024-06-10 12:33:55.298062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.870 [2024-06-10 12:33:55.298069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.870 [2024-06-10 12:33:55.298087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.870 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.307839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.870 [2024-06-10 12:33:55.307904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.870 [2024-06-10 12:33:55.307922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.870 [2024-06-10 12:33:55.307929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.870 [2024-06-10 12:33:55.307936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:49.870 [2024-06-10 12:33:55.307951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.870 qpair failed and we were unable to recover it.
00:29:49.870 [2024-06-10 12:33:55.317865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.317925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.317941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.317949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.317955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.317969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.328000] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.328056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.328073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.328080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.328087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.328101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.338036] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.338091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.338107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.338118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.338125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.338139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 
00:29:49.870 [2024-06-10 12:33:55.348046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.348104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.348120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.348127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.348133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.348147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.357963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.358030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.358046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.358053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.358060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.358074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.367997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.368067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.368083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.368090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.368097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.368110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 
00:29:49.870 [2024-06-10 12:33:55.378127] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.378189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.378209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.378216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.378222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.378236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.388139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.388204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.870 [2024-06-10 12:33:55.388220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.870 [2024-06-10 12:33:55.388227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.870 [2024-06-10 12:33:55.388233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.870 [2024-06-10 12:33:55.388247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.870 qpair failed and we were unable to recover it. 00:29:49.870 [2024-06-10 12:33:55.398178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.870 [2024-06-10 12:33:55.398244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.398261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.398267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.398274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.398289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 
00:29:49.871 [2024-06-10 12:33:55.408228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.408285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.408301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.408308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.408314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.408328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 00:29:49.871 [2024-06-10 12:33:55.418245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.418312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.418328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.418335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.418341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.418355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 00:29:49.871 [2024-06-10 12:33:55.428286] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.428352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.428371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.428379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.428385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.428399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 
00:29:49.871 [2024-06-10 12:33:55.438229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.438283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.438299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.438307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.438313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.438326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 00:29:49.871 [2024-06-10 12:33:55.448329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.448382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.448398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.448405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.448411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.448425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 00:29:49.871 [2024-06-10 12:33:55.458356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.458443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.458460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.458467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.458473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.458487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 
00:29:49.871 [2024-06-10 12:33:55.468307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.871 [2024-06-10 12:33:55.468371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.871 [2024-06-10 12:33:55.468387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.871 [2024-06-10 12:33:55.468394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.871 [2024-06-10 12:33:55.468401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:49.871 [2024-06-10 12:33:55.468417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.871 qpair failed and we were unable to recover it. 00:29:50.133 [2024-06-10 12:33:55.478410] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.133 [2024-06-10 12:33:55.478478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.133 [2024-06-10 12:33:55.478495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.133 [2024-06-10 12:33:55.478506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.133 [2024-06-10 12:33:55.478512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.133 [2024-06-10 12:33:55.478527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.133 qpair failed and we were unable to recover it. 00:29:50.133 [2024-06-10 12:33:55.488439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.133 [2024-06-10 12:33:55.488494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.133 [2024-06-10 12:33:55.488511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.133 [2024-06-10 12:33:55.488518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.133 [2024-06-10 12:33:55.488524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.133 [2024-06-10 12:33:55.488539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.133 qpair failed and we were unable to recover it. 
00:29:50.133 [2024-06-10 12:33:55.498491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.133 [2024-06-10 12:33:55.498590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.133 [2024-06-10 12:33:55.498606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.133 [2024-06-10 12:33:55.498613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.133 [2024-06-10 12:33:55.498620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.133 [2024-06-10 12:33:55.498633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.133 qpair failed and we were unable to recover it. 00:29:50.133 [2024-06-10 12:33:55.508553] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.133 [2024-06-10 12:33:55.508659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.133 [2024-06-10 12:33:55.508675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.133 [2024-06-10 12:33:55.508683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.133 [2024-06-10 12:33:55.508689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.508703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.518508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.518562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.518582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.518589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.518595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.518609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 
00:29:50.134 [2024-06-10 12:33:55.528562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.528619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.528635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.528642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.528648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.528661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.538592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.538648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.538665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.538672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.538678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.538691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.548482] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.548542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.548558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.548565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.548571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.548585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 
00:29:50.134 [2024-06-10 12:33:55.558598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.558662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.558678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.558685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.558692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.558708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.568662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.568717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.568732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.568739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.568746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.568759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.578618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.578687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.578702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.578709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.578715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.578729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 
00:29:50.134 [2024-06-10 12:33:55.588713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.588799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.588815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.588822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.588828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.588841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.598741] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.598796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.598811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.598818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.598824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.598838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.608801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.608858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.608877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.608884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.608890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.608904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 
00:29:50.134 [2024-06-10 12:33:55.618797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.618852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.618868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.618875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.618881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.618895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.628784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.628850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.628866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.628873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.628880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.628893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 00:29:50.134 [2024-06-10 12:33:55.638731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.134 [2024-06-10 12:33:55.638792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.134 [2024-06-10 12:33:55.638809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.134 [2024-06-10 12:33:55.638816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.134 [2024-06-10 12:33:55.638822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.134 [2024-06-10 12:33:55.638836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.134 qpair failed and we were unable to recover it. 
00:29:50.134 [2024-06-10 12:33:55.648835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.648934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.648951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.648958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.648965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.648982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.658817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.658876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.658892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.658899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.658905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.658918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.668930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.669046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.669071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.669080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.669087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.669106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 
00:29:50.135 [2024-06-10 12:33:55.679025] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.679082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.679100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.679108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.679115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.679129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.688980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.689041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.689058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.689065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.689072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.689086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.699046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.699114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.699138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.699145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.699151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.699165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 
00:29:50.135 [2024-06-10 12:33:55.709015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.709078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.709095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.709102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.709108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.709122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.718973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.719030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.719046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.719053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.719060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.719073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 00:29:50.135 [2024-06-10 12:33:55.729081] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.135 [2024-06-10 12:33:55.729187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.135 [2024-06-10 12:33:55.729210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.135 [2024-06-10 12:33:55.729217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.135 [2024-06-10 12:33:55.729223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.135 [2024-06-10 12:33:55.729238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.135 qpair failed and we were unable to recover it. 
00:29:50.398 [2024-06-10 12:33:55.739016] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.739152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.739169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.739177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.739187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.739206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.749132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.749199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.749218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.749225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.749232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.749247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.759030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.759089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.759105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.759113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.759119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.759133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 
00:29:50.398 [2024-06-10 12:33:55.769179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.769241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.769258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.769265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.769271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.769285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.779089] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.779147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.779163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.779171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.779177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.779191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.789239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.789302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.789318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.789325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.789331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.789345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 
00:29:50.398 [2024-06-10 12:33:55.799229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.799288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.799303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.799310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.799317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.799330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.809226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.809296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.809312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.809319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.809325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.809338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 00:29:50.398 [2024-06-10 12:33:55.819316] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.398 [2024-06-10 12:33:55.819416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.398 [2024-06-10 12:33:55.819433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.398 [2024-06-10 12:33:55.819440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.398 [2024-06-10 12:33:55.819446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.398 [2024-06-10 12:33:55.819460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 qpair failed and we were unable to recover it. 
00:29:50.398 [2024-06-10 12:33:55.829364] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.398 [2024-06-10 12:33:55.829453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.398 [2024-06-10 12:33:55.829470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.398 [2024-06-10 12:33:55.829477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.398 [2024-06-10 12:33:55.829486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.398 [2024-06-10 12:33:55.829500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.398 qpair failed and we were unable to recover it.
00:29:50.398 [2024-06-10 12:33:55.839383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.398 [2024-06-10 12:33:55.839441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.839456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.839464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.839470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.839484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.849403] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.849458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.849474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.849481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.849487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.849501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.859440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.859505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.859521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.859528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.859535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.859548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.869505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.869569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.869585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.869592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.869599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.869612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.879509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.879565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.879580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.879587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.879593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.879607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.889526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.889583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.889600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.889607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.889613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.889627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.899565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.899641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.899657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.899664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.899671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.899685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.909590] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.909689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.909706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.909713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.909719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.909733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.919640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.919697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.919713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.919724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.919731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.919744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.929624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.929686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.929703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.929710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.929716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.929730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.939547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.939603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.939619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.939626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.939633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.939647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.949677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.949737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.949753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.949760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.949767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.949780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.959728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.959783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.959799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.959806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.959812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.399 [2024-06-10 12:33:55.959825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.399 qpair failed and we were unable to recover it.
00:29:50.399 [2024-06-10 12:33:55.969758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.399 [2024-06-10 12:33:55.969812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.399 [2024-06-10 12:33:55.969828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.399 [2024-06-10 12:33:55.969836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.399 [2024-06-10 12:33:55.969842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.400 [2024-06-10 12:33:55.969855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.400 qpair failed and we were unable to recover it.
00:29:50.400 [2024-06-10 12:33:55.979645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.400 [2024-06-10 12:33:55.979702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.400 [2024-06-10 12:33:55.979718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.400 [2024-06-10 12:33:55.979726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.400 [2024-06-10 12:33:55.979732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.400 [2024-06-10 12:33:55.979746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.400 qpair failed and we were unable to recover it.
00:29:50.400 [2024-06-10 12:33:55.989808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.400 [2024-06-10 12:33:55.989875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.400 [2024-06-10 12:33:55.989892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.400 [2024-06-10 12:33:55.989899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.400 [2024-06-10 12:33:55.989905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.400 [2024-06-10 12:33:55.989918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.400 qpair failed and we were unable to recover it.
00:29:50.400 [2024-06-10 12:33:55.999849] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.400 [2024-06-10 12:33:55.999904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.400 [2024-06-10 12:33:55.999920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.400 [2024-06-10 12:33:55.999927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.400 [2024-06-10 12:33:55.999933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.400 [2024-06-10 12:33:55.999947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.400 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.009866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.009950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.009967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.009979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.009986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.010000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.019894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.019951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.019968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.019975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.019981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.019994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.029916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.029975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.029992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.029999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.030005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.030019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.039937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.039990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.040006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.040013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.040019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.040032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.049966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.050026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.050042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.050050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.050057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.050070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.059894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.059989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.060006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.060013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.060019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.060033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.070011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.070077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.070094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.070101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.070108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.070121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.080035] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.080093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.080110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.080117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.080123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.080137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.090014] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.090073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.090089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.090097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.090103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.090117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.100003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.100060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.100077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.662 [2024-06-10 12:33:56.100087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.662 [2024-06-10 12:33:56.100094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.662 [2024-06-10 12:33:56.100108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.662 qpair failed and we were unable to recover it.
00:29:50.662 [2024-06-10 12:33:56.110120] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.662 [2024-06-10 12:33:56.110181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.662 [2024-06-10 12:33:56.110202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.110210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.110217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.110230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.120144] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.120207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.120223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.120230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.120236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.120251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.130073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.130175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.130191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.130207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.130213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.130227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.140181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.140243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.140259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.140267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.140273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.140287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.150244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.150308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.150325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.150332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.150338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.150351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.160281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.160339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.160355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.160362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.160368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.160383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.170330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.170391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.170407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.170415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.170421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.170435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.180334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.180422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.180438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.180445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.180452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.180465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.190379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.190445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.190464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.190472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.190478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.190492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.200295] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.200351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.200368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.200375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.200381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.200394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.210423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.210484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.210499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.210507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.210513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.210526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.220357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.220414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.220429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.220436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.220443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.220457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.230478] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.230539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.230555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.230562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.230569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.663 [2024-06-10 12:33:56.230582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.663 qpair failed and we were unable to recover it.
00:29:50.663 [2024-06-10 12:33:56.240514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.663 [2024-06-10 12:33:56.240569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.663 [2024-06-10 12:33:56.240585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.663 [2024-06-10 12:33:56.240592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.663 [2024-06-10 12:33:56.240598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.664 [2024-06-10 12:33:56.240612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.664 qpair failed and we were unable to recover it.
00:29:50.664 [2024-06-10 12:33:56.250531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.664 [2024-06-10 12:33:56.250617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.664 [2024-06-10 12:33:56.250633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.664 [2024-06-10 12:33:56.250640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.664 [2024-06-10 12:33:56.250646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.664 [2024-06-10 12:33:56.250660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.664 qpair failed and we were unable to recover it.
00:29:50.664 [2024-06-10 12:33:56.260638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.664 [2024-06-10 12:33:56.260696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.664 [2024-06-10 12:33:56.260711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.664 [2024-06-10 12:33:56.260719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.664 [2024-06-10 12:33:56.260725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.664 [2024-06-10 12:33:56.260738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.664 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.270548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.270608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.270625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.270633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.270639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.270653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.280600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.280666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.280685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.280692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.280699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.280712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.290678] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.290730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.290746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.290753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.290760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.290773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.300631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.300736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.300752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.300760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.300766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.300780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.310705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.310809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.310825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.310832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.310839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.310853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.320677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.320738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.320754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.320761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.320767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.320784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.330633] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.330696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.330712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.926 [2024-06-10 12:33:56.330719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.926 [2024-06-10 12:33:56.330725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.926 [2024-06-10 12:33:56.330738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.926 qpair failed and we were unable to recover it.
00:29:50.926 [2024-06-10 12:33:56.340786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.926 [2024-06-10 12:33:56.340848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.926 [2024-06-10 12:33:56.340864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.340872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.340878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.340892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.350802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.350886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.350903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.350912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.350918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.350932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.360845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.360901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.360917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.360924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.360930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.360944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.370761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.370817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.370836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.370844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.370850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.370863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.380897] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.380956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.380972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.380979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.380985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.380999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.390810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.390913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.390930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.390937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.390944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.390957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.400818] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.400875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.400891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.400898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.400904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.400918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.410969] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.411032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.411057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.411066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.411073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.411096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.420970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.421037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.421055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.421062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.421068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.421083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.431029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.431093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.431110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.431117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.431124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.431137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.440925] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.440982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.440999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.441006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.441012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.441026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.451069] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.451147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.451164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.451171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.451178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.451192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.461122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.461236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.461259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.461266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.927 [2024-06-10 12:33:56.461273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.927 [2024-06-10 12:33:56.461286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.927 qpair failed and we were unable to recover it.
00:29:50.927 [2024-06-10 12:33:56.471098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.927 [2024-06-10 12:33:56.471161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.927 [2024-06-10 12:33:56.471177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.927 [2024-06-10 12:33:56.471185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.928 [2024-06-10 12:33:56.471191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.928 [2024-06-10 12:33:56.471211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.928 qpair failed and we were unable to recover it.
00:29:50.928 [2024-06-10 12:33:56.481078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.928 [2024-06-10 12:33:56.481178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.928 [2024-06-10 12:33:56.481200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.928 [2024-06-10 12:33:56.481207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.928 [2024-06-10 12:33:56.481214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.928 [2024-06-10 12:33:56.481228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.928 qpair failed and we were unable to recover it.
00:29:50.928 [2024-06-10 12:33:56.491201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.928 [2024-06-10 12:33:56.491259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.928 [2024-06-10 12:33:56.491275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.928 [2024-06-10 12:33:56.491283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.928 [2024-06-10 12:33:56.491289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.928 [2024-06-10 12:33:56.491303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.928 qpair failed and we were unable to recover it.
00:29:50.928 [2024-06-10 12:33:56.501208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.928 [2024-06-10 12:33:56.501264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.928 [2024-06-10 12:33:56.501280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.928 [2024-06-10 12:33:56.501287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.928 [2024-06-10 12:33:56.501297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.928 [2024-06-10 12:33:56.501310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.928 qpair failed and we were unable to recover it.
00:29:50.928 [2024-06-10 12:33:56.511241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.928 [2024-06-10 12:33:56.511302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.928 [2024-06-10 12:33:56.511318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.928 [2024-06-10 12:33:56.511325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.928 [2024-06-10 12:33:56.511332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:50.928 [2024-06-10 12:33:56.511345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.928 qpair failed and we were unable to recover it.
00:29:50.928 [2024-06-10 12:33:56.521307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.928 [2024-06-10 12:33:56.521364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.928 [2024-06-10 12:33:56.521380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.928 [2024-06-10 12:33:56.521387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.928 [2024-06-10 12:33:56.521393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:50.928 [2024-06-10 12:33:56.521407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.928 qpair failed and we were unable to recover it. 00:29:51.190 [2024-06-10 12:33:56.531289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.531393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.531409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.531417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.531423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.531436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.541327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.541384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.541402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.541409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.541416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.541430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 
00:29:51.191 [2024-06-10 12:33:56.551331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.551400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.551416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.551423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.551430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.551443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.561405] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.561460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.561476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.561483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.561489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.561503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.571435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.571492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.571509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.571516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.571522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.571536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 
00:29:51.191 [2024-06-10 12:33:56.581337] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.581396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.581413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.581420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.581426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.581440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.591483] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.591550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.591566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.591574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.591583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.591598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.601379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.601433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.601449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.601457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.601463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.601476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 
00:29:51.191 [2024-06-10 12:33:56.611526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.611587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.611603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.611610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.611616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.611630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.621591] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.621682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.621698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.621706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.621712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.621726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.631581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.631676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.631693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.631700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.631706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.631719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 
00:29:51.191 [2024-06-10 12:33:56.641623] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.641689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.641705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.641712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.641718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.641731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.651565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.651664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.651681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.191 [2024-06-10 12:33:56.651688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.191 [2024-06-10 12:33:56.651695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.191 [2024-06-10 12:33:56.651708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.191 qpair failed and we were unable to recover it. 00:29:51.191 [2024-06-10 12:33:56.661692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.191 [2024-06-10 12:33:56.661792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.191 [2024-06-10 12:33:56.661809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.661817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.661823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.661836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 
00:29:51.192 [2024-06-10 12:33:56.671683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.671746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.671762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.671769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.671775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.671788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.681722] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.681815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.681831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.681838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.681848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.681862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.691761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.691840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.691856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.691864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.691870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.691884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 
00:29:51.192 [2024-06-10 12:33:56.701807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.701867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.701883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.701891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.701897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.701910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.711811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.711914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.711932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.711939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.711945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.711958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.721837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.721903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.721929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.721937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.721945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.721963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 
00:29:51.192 [2024-06-10 12:33:56.731874] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.731935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.731960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.731968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.731975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.731994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.741900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.741962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.741987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.741995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.742002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.742021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.751926] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.751994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.752020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.752028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.752036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.752054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 
00:29:51.192 [2024-06-10 12:33:56.761838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.761896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.761915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.761922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.761929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.761944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.771987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.772079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.772096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.772108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.772116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.772130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 00:29:51.192 [2024-06-10 12:33:56.782023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.782078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.782094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.782102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.782108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.192 [2024-06-10 12:33:56.782122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.192 qpair failed and we were unable to recover it. 
00:29:51.192 [2024-06-10 12:33:56.792033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.192 [2024-06-10 12:33:56.792094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.192 [2024-06-10 12:33:56.792111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.192 [2024-06-10 12:33:56.792118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.192 [2024-06-10 12:33:56.792124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.193 [2024-06-10 12:33:56.792137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.193 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.802058] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.802111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.802127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.802134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.802141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.802154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.812084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.812149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.812165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.812172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.812178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.812193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 
00:29:51.455 [2024-06-10 12:33:56.822121] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.822180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.822201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.822208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.822215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.822228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.832149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.832243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.832260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.832267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.832273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.832287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.842231] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.842287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.842303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.842310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.842316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.842330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 
00:29:51.455 [2024-06-10 12:33:56.852221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.852325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.852341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.852348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.852355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.852368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.862297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.862369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.862385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.862395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.862402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.862415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.872257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.872319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.872335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.872342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.872348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.872362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 
00:29:51.455 [2024-06-10 12:33:56.882326] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.882429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.882445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.455 [2024-06-10 12:33:56.882453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.455 [2024-06-10 12:33:56.882459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.455 [2024-06-10 12:33:56.882473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.455 qpair failed and we were unable to recover it. 00:29:51.455 [2024-06-10 12:33:56.892329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.455 [2024-06-10 12:33:56.892385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.455 [2024-06-10 12:33:56.892402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.892408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.892415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.892428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.902360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.902417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.902433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.902443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.902450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.902464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 
00:29:51.456 [2024-06-10 12:33:56.912374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.912437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.912453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.912460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.912467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.912480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.922300] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.922361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.922377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.922385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.922391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.922405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.932431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.932486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.932502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.932509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.932516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.932529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 
00:29:51.456 [2024-06-10 12:33:56.942492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.942548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.942564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.942571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.942577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.942591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.952438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.952499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.952519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.952526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.952533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.952546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.962518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.962579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.962595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.962602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.962609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.962622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 
00:29:51.456 [2024-06-10 12:33:56.972596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.972674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.972690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.972698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.972705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.972719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.982576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.982672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.982688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.982696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.982702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.982715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 00:29:51.456 [2024-06-10 12:33:56.992723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.456 [2024-06-10 12:33:56.992797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.456 [2024-06-10 12:33:56.992813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.456 [2024-06-10 12:33:56.992820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.456 [2024-06-10 12:33:56.992827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:51.456 [2024-06-10 12:33:56.992840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.456 qpair failed and we were unable to recover it. 
00:29:52.250 [2024-06-10 12:33:57.634297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.634363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.634379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.634386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.634392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.634406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 00:29:52.250 [2024-06-10 12:33:57.644301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.644354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.644370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.644377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.644383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.644397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 00:29:52.250 [2024-06-10 12:33:57.654436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.654488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.654505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.654512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.654518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.654531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 
00:29:52.250 [2024-06-10 12:33:57.664449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.664512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.664528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.664535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.664541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.664555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 00:29:52.250 [2024-06-10 12:33:57.674461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.674516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.674532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.674539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.674545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.674559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 00:29:52.250 [2024-06-10 12:33:57.684500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.684554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.684569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.684577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.684583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.684597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 
00:29:52.250 [2024-06-10 12:33:57.694486] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.694535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.250 [2024-06-10 12:33:57.694552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.250 [2024-06-10 12:33:57.694559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.250 [2024-06-10 12:33:57.694565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.250 [2024-06-10 12:33:57.694578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.250 qpair failed and we were unable to recover it. 00:29:52.250 [2024-06-10 12:33:57.704541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.250 [2024-06-10 12:33:57.704597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.704612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.704619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.704626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.704639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.714552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.714608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.714623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.714634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.714640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.714654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 
00:29:52.251 [2024-06-10 12:33:57.724602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.724650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.724666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.724673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.724679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.724693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.734618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.734668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.734684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.734691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.734698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.734711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.744707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.744765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.744781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.744788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.744794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.744808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 
00:29:52.251 [2024-06-10 12:33:57.754652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.754738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.754756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.754764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.754770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.754784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.764707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.764754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.764770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.764777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.764784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.764797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.774741] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.774838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.774854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.774861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.774868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.774881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 
00:29:52.251 [2024-06-10 12:33:57.784807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.784863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.784880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.784887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.784893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.784906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.794795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.794855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.794871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.794879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.794885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.794898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.804825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.804877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.804897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.804905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.804911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.804924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 
00:29:52.251 [2024-06-10 12:33:57.814851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.814905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.814921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.814928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.814935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.814948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.824916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.824973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.824989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.824996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.825002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.251 [2024-06-10 12:33:57.825016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.251 qpair failed and we were unable to recover it. 00:29:52.251 [2024-06-10 12:33:57.834902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.251 [2024-06-10 12:33:57.834963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.251 [2024-06-10 12:33:57.834980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.251 [2024-06-10 12:33:57.834987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.251 [2024-06-10 12:33:57.834993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.252 [2024-06-10 12:33:57.835006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.252 qpair failed and we were unable to recover it. 
00:29:52.252 [2024-06-10 12:33:57.844951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.252 [2024-06-10 12:33:57.845002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.252 [2024-06-10 12:33:57.845018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.252 [2024-06-10 12:33:57.845025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.252 [2024-06-10 12:33:57.845031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.252 [2024-06-10 12:33:57.845048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.252 qpair failed and we were unable to recover it. 00:29:52.514 [2024-06-10 12:33:57.855005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.514 [2024-06-10 12:33:57.855083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.514 [2024-06-10 12:33:57.855100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.514 [2024-06-10 12:33:57.855107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.855114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.855127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.864995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.865077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.865093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.865100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.865107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.865121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-06-10 12:33:57.875010] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.875070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.875086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.875094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.875101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.875114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.884925] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.884979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.884995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.885002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.885008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.885022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.895134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.895189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.895214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.895222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.895228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.895242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-06-10 12:33:57.905204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.905267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.905283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.905290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.905296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.905310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.915136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.915197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.915214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.915221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.915228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.915241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.925172] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.925228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.925244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.925252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.925258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.925272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-06-10 12:33:57.935072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.935128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.935145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.935152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.935158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.935175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.945255] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.945314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.945330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.945338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.945344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.945357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.955270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.955357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.955373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.955380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.955386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.955400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-06-10 12:33:57.965150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.965208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.965225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.965232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.965238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.965252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.975285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.975342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.515 [2024-06-10 12:33:57.975358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.515 [2024-06-10 12:33:57.975365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.515 [2024-06-10 12:33:57.975371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.515 [2024-06-10 12:33:57.975385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-06-10 12:33:57.985380] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.515 [2024-06-10 12:33:57.985437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:57.985460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:57.985468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:57.985474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:57.985487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-06-10 12:33:57.995354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:57.995406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:57.995422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:57.995429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:57.995435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:57.995449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.005374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.005423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.005439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.005446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.005452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.005467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.015428] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.015483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.015499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.015506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.015513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.015526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-06-10 12:33:58.025403] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.025462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.025478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.025485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.025491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.025509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.035467] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.035527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.035543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.035550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.035556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.035570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.045490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.045552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.045568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.045575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.045581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.045595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-06-10 12:33:58.055482] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.055539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.055554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.055562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.055568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.055581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.065550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.065649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.065666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.065673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.065679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.065693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.075562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.075626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.075646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.075653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.075659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.075672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-06-10 12:33:58.085600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.085650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.085666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.085673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.085679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.085693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.095653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.095707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.516 [2024-06-10 12:33:58.095724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.516 [2024-06-10 12:33:58.095731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.516 [2024-06-10 12:33:58.095737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.516 [2024-06-10 12:33:58.095751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-06-10 12:33:58.105666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.516 [2024-06-10 12:33:58.105728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.517 [2024-06-10 12:33:58.105744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.517 [2024-06-10 12:33:58.105751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.517 [2024-06-10 12:33:58.105757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.517 [2024-06-10 12:33:58.105771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.517 qpair failed and we were unable to recover it. 
00:29:52.517 [2024-06-10 12:33:58.115678] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.517 [2024-06-10 12:33:58.115744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.517 [2024-06-10 12:33:58.115760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.517 [2024-06-10 12:33:58.115767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.517 [2024-06-10 12:33:58.115777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.517 [2024-06-10 12:33:58.115790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.125624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.125685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.125701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.125708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.125715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.125728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.135782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.135838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.135854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.135861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.135867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.135880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 
00:29:52.779 [2024-06-10 12:33:58.145801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.145859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.145876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.145883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.145889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.145902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.155790] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.155849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.155866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.155873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.155879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.155892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.165798] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.165852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.165868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.165875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.165882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.165895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 
00:29:52.779 [2024-06-10 12:33:58.175910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.176001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.176017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.176025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.176031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.176045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.185837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.185931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.185948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.185955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.779 [2024-06-10 12:33:58.185961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.779 [2024-06-10 12:33:58.185974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.779 qpair failed and we were unable to recover it. 00:29:52.779 [2024-06-10 12:33:58.195900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.779 [2024-06-10 12:33:58.195956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.779 [2024-06-10 12:33:58.195972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.779 [2024-06-10 12:33:58.195979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.195985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.195998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 
00:29:52.780 [2024-06-10 12:33:58.205923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.205972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.205988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.205995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.206005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.206018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.216003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.216060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.216076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.216083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.216089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.216103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.226036] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.226093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.226108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.226116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.226122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.226136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 
00:29:52.780 [2024-06-10 12:33:58.235997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.236056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.236074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.236081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.236091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.236106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.245910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.245963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.245980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.245987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.245994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.246009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.256055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.256115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.256132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.256139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.256146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.256160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 
00:29:52.780 [2024-06-10 12:33:58.266130] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.266189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.266211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.266219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.266225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.266239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.275992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.276049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.276065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.276072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.276078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.276092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.286139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.286209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.286225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.286232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.286238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.286253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 
00:29:52.780 [2024-06-10 12:33:58.296207] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.296300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.296316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.296327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.296333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.296346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.306215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.306271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.306287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.306294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.306300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.306314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.780 [2024-06-10 12:33:58.316223] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.316282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.316298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.316305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.316311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.316325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 
00:29:52.780 [2024-06-10 12:33:58.326292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.780 [2024-06-10 12:33:58.326381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.780 [2024-06-10 12:33:58.326398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.780 [2024-06-10 12:33:58.326405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.780 [2024-06-10 12:33:58.326411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.780 [2024-06-10 12:33:58.326425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.780 qpair failed and we were unable to recover it. 00:29:52.781 [2024-06-10 12:33:58.336173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.781 [2024-06-10 12:33:58.336230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.781 [2024-06-10 12:33:58.336246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.781 [2024-06-10 12:33:58.336253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.781 [2024-06-10 12:33:58.336259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.781 [2024-06-10 12:33:58.336272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.781 qpair failed and we were unable to recover it. 00:29:52.781 [2024-06-10 12:33:58.346299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.781 [2024-06-10 12:33:58.346358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.781 [2024-06-10 12:33:58.346374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.781 [2024-06-10 12:33:58.346382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.781 [2024-06-10 12:33:58.346388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.781 [2024-06-10 12:33:58.346401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.781 qpair failed and we were unable to recover it. 
00:29:52.781 [2024-06-10 12:33:58.356322] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.781 [2024-06-10 12:33:58.356387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.781 [2024-06-10 12:33:58.356403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.781 [2024-06-10 12:33:58.356410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.781 [2024-06-10 12:33:58.356417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.781 [2024-06-10 12:33:58.356430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.781 qpair failed and we were unable to recover it. 00:29:52.781 [2024-06-10 12:33:58.366422] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.781 [2024-06-10 12:33:58.366491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.781 [2024-06-10 12:33:58.366507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.781 [2024-06-10 12:33:58.366514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.781 [2024-06-10 12:33:58.366520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.781 [2024-06-10 12:33:58.366534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.781 qpair failed and we were unable to recover it. 00:29:52.781 [2024-06-10 12:33:58.376369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.781 [2024-06-10 12:33:58.376427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.781 [2024-06-10 12:33:58.376443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.781 [2024-06-10 12:33:58.376450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.781 [2024-06-10 12:33:58.376457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:52.781 [2024-06-10 12:33:58.376470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.781 qpair failed and we were unable to recover it. 
00:29:53.044 [2024-06-10 12:33:58.386435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.386493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.386510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.386520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.386527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.386540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.396441] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.396497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.396513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.396520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.396526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.396539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.406476] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.406528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.406544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.406551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.406557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.406571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 
00:29:53.044 [2024-06-10 12:33:58.416370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.416429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.416445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.416452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.416458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.416471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.426560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.426622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.426638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.426645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.426651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.426665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.436527] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.436588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.436604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.436611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.436617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.436630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 
00:29:53.044 [2024-06-10 12:33:58.446448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.446501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.446517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.446524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.446530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.446544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.456599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.456650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.456665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.456673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.456679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.456692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.466683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.466740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.466756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.466763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.466769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.466783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 
00:29:53.044 [2024-06-10 12:33:58.476668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.044 [2024-06-10 12:33:58.476721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.044 [2024-06-10 12:33:58.476738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.044 [2024-06-10 12:33:58.476748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.044 [2024-06-10 12:33:58.476754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.044 [2024-06-10 12:33:58.476767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.044 qpair failed and we were unable to recover it. 00:29:53.044 [2024-06-10 12:33:58.486693] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.486797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.486813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.486820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.486827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.486840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.496717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.496775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.496790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.496798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.496804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.496817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 
00:29:53.045 [2024-06-10 12:33:58.506790] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.506849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.506865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.506872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.506878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.506892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.516802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.516862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.516878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.516885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.516892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.516905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.526793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.526845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.526861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.526869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.526875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.526889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 
00:29:53.045 [2024-06-10 12:33:58.536843] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.536896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.536912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.536920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.536926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.536939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.546914] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.547006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.547022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.547029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.547036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.547049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.556889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.556985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.557002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.557010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.557016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.557030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 
00:29:53.045 [2024-06-10 12:33:58.566802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.566905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.566925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.566932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.566938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.566953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.576877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.576930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.576946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.576954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.576959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.576973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.586942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.587002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.587018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.587025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.587031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.587045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 
00:29:53.045 [2024-06-10 12:33:58.596991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.597061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.597086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.597094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.597102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.597120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.607022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.607113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.607131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.045 [2024-06-10 12:33:58.607138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.045 [2024-06-10 12:33:58.607145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.045 [2024-06-10 12:33:58.607160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.045 qpair failed and we were unable to recover it. 00:29:53.045 [2024-06-10 12:33:58.617047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.045 [2024-06-10 12:33:58.617102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.045 [2024-06-10 12:33:58.617119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.046 [2024-06-10 12:33:58.617126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.046 [2024-06-10 12:33:58.617133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.046 [2024-06-10 12:33:58.617146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.046 qpair failed and we were unable to recover it. 
00:29:53.046 [2024-06-10 12:33:58.627117] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.046 [2024-06-10 12:33:58.627175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.046 [2024-06-10 12:33:58.627191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.046 [2024-06-10 12:33:58.627203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.046 [2024-06-10 12:33:58.627209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.046 [2024-06-10 12:33:58.627223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.046 qpair failed and we were unable to recover it. 00:29:53.046 [2024-06-10 12:33:58.637124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.046 [2024-06-10 12:33:58.637191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.046 [2024-06-10 12:33:58.637211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.046 [2024-06-10 12:33:58.637218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.046 [2024-06-10 12:33:58.637224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.046 [2024-06-10 12:33:58.637239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.046 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.647120] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.647171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.647187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.647200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.647206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.647221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 
00:29:53.309 [2024-06-10 12:33:58.657154] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.657213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.657232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.657239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.657246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.657260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.667259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.667339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.667355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.667362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.667370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.667383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.677138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.677202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.677218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.677225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.677231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.677245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 
00:29:53.309 [2024-06-10 12:33:58.687225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.687273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.687289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.687297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.687303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.687316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.697266] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.697321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.697337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.697345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.697351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.697368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.707337] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.707396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.707412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.707419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.707426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.707439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 
00:29:53.309 [2024-06-10 12:33:58.717376] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.717434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.717450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.717457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.717463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.717477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.727260] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.727308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.727324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.727331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.727337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.727351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.737379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.737431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.737447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.737454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.737460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.737474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 
00:29:53.309 [2024-06-10 12:33:58.747444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.747504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.747528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.747535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.747541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.747555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.757433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.757493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.757509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.757516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.757522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.757535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 00:29:53.309 [2024-06-10 12:33:58.767330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.309 [2024-06-10 12:33:58.767380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.309 [2024-06-10 12:33:58.767397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.309 [2024-06-10 12:33:58.767404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.309 [2024-06-10 12:33:58.767410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.309 [2024-06-10 12:33:58.767423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.309 qpair failed and we were unable to recover it. 
00:29:53.309 [2024-06-10 12:33:58.777483] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.777541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.777556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.777564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.777570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.777583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.787430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.787489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.787505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.787512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.787518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.787536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.797529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.797596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.797612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.797619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.797626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.797639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 
00:29:53.310 [2024-06-10 12:33:58.807579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.807661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.807677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.807684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.807691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.807704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.817648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.817694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.817710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.817717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.817723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.817737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.827622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.827682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.827698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.827704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.827711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.827724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 
00:29:53.310 [2024-06-10 12:33:58.837641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.837698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.837717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.837724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.837730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.837743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.847628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.847679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.847695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.847702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.847708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.847722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.857746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.857844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.857860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.857868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.857874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.857888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 
00:29:53.310 [2024-06-10 12:33:58.867774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.867866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.867882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.867889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.867896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.867909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.877631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.877687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.877703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.877710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.877720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.877733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.887788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.887843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.887858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.887865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.887872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.887885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 
00:29:53.310 [2024-06-10 12:33:58.897814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.897865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.897881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.897888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.310 [2024-06-10 12:33:58.897894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.310 [2024-06-10 12:33:58.897908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.310 qpair failed and we were unable to recover it. 00:29:53.310 [2024-06-10 12:33:58.907786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.310 [2024-06-10 12:33:58.907850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.310 [2024-06-10 12:33:58.907866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.310 [2024-06-10 12:33:58.907873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.311 [2024-06-10 12:33:58.907879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.311 [2024-06-10 12:33:58.907892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.311 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.917844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.917900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.917916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.917923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.917929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.917942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 
00:29:53.573 [2024-06-10 12:33:58.927909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.927966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.927982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.927989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.927995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.928009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.938001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.938059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.938076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.938083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.938089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.938103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.948001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.948058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.948074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.948081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.948087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.948101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 
00:29:53.573 [2024-06-10 12:33:58.958000] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.958055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.958071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.958077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.958084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.958098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.968010] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.968067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.968082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.968089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.968099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.968113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.978029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.978084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.978100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.978107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.978113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.978126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 
00:29:53.573 [2024-06-10 12:33:58.988098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.988153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.573 [2024-06-10 12:33:58.988170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.573 [2024-06-10 12:33:58.988177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.573 [2024-06-10 12:33:58.988183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.573 [2024-06-10 12:33:58.988202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.573 qpair failed and we were unable to recover it. 00:29:53.573 [2024-06-10 12:33:58.998114] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.573 [2024-06-10 12:33:58.998171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:58.998187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:58.998198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:58.998205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:58.998219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.008008] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.008060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.008076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.008083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.008089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.008103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 
00:29:53.574 [2024-06-10 12:33:59.018067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.018129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.018145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.018152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.018158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.018171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.028183] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.028246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.028262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.028269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.028276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.028290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.038226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.038369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.038385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.038392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.038398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.038412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 
00:29:53.574 [2024-06-10 12:33:59.048242] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.048294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.048309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.048317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.048323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.048337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.058274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.058365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.058382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.058389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.058400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.058415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.068336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.068395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.068410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.068417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.068424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.068437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 
00:29:53.574 [2024-06-10 12:33:59.078331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.078389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.078405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.078412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.078419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.078432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.088331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.088386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.088402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.088409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.088415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.088429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.098372] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.098427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.098443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.098450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.098456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.098470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 
00:29:53.574 [2024-06-10 12:33:59.108454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.108512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.108528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.108536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.108542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.108556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.118447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.118504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.118520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.574 [2024-06-10 12:33:59.118528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.574 [2024-06-10 12:33:59.118534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.574 [2024-06-10 12:33:59.118547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.574 qpair failed and we were unable to recover it. 00:29:53.574 [2024-06-10 12:33:59.128366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.574 [2024-06-10 12:33:59.128417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.574 [2024-06-10 12:33:59.128433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.575 [2024-06-10 12:33:59.128440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.575 [2024-06-10 12:33:59.128447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.575 [2024-06-10 12:33:59.128460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.575 qpair failed and we were unable to recover it. 
00:29:53.575 [2024-06-10 12:33:59.138483] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.575 [2024-06-10 12:33:59.138534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.575 [2024-06-10 12:33:59.138550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.575 [2024-06-10 12:33:59.138557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.575 [2024-06-10 12:33:59.138564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.575 [2024-06-10 12:33:59.138577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.575 qpair failed and we were unable to recover it. 00:29:53.575 [2024-06-10 12:33:59.148561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.575 [2024-06-10 12:33:59.148618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.575 [2024-06-10 12:33:59.148633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.575 [2024-06-10 12:33:59.148644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.575 [2024-06-10 12:33:59.148650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.575 [2024-06-10 12:33:59.148663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.575 qpair failed and we were unable to recover it. 00:29:53.575 [2024-06-10 12:33:59.158532] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.575 [2024-06-10 12:33:59.158591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.575 [2024-06-10 12:33:59.158607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.575 [2024-06-10 12:33:59.158614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.575 [2024-06-10 12:33:59.158620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.575 [2024-06-10 12:33:59.158633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.575 qpair failed and we were unable to recover it. 
00:29:53.575 [2024-06-10 12:33:59.168550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.575 [2024-06-10 12:33:59.168599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.575 [2024-06-10 12:33:59.168615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.575 [2024-06-10 12:33:59.168622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.575 [2024-06-10 12:33:59.168628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.575 [2024-06-10 12:33:59.168641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.575 qpair failed and we were unable to recover it. 00:29:53.837 [2024-06-10 12:33:59.178541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.178592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.837 [2024-06-10 12:33:59.178608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.837 [2024-06-10 12:33:59.178615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.837 [2024-06-10 12:33:59.178622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.837 [2024-06-10 12:33:59.178635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.837 qpair failed and we were unable to recover it. 00:29:53.837 [2024-06-10 12:33:59.188652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.188712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.837 [2024-06-10 12:33:59.188727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.837 [2024-06-10 12:33:59.188735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.837 [2024-06-10 12:33:59.188741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.837 [2024-06-10 12:33:59.188755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.837 qpair failed and we were unable to recover it. 
00:29:53.837 [2024-06-10 12:33:59.198657] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.198710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.837 [2024-06-10 12:33:59.198727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.837 [2024-06-10 12:33:59.198734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.837 [2024-06-10 12:33:59.198740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.837 [2024-06-10 12:33:59.198754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.837 qpair failed and we were unable to recover it. 00:29:53.837 [2024-06-10 12:33:59.208652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.208748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.837 [2024-06-10 12:33:59.208765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.837 [2024-06-10 12:33:59.208772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.837 [2024-06-10 12:33:59.208778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.837 [2024-06-10 12:33:59.208792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.837 qpair failed and we were unable to recover it. 00:29:53.837 [2024-06-10 12:33:59.218717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.218764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.837 [2024-06-10 12:33:59.218780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.837 [2024-06-10 12:33:59.218787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.837 [2024-06-10 12:33:59.218794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.837 [2024-06-10 12:33:59.218807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.837 qpair failed and we were unable to recover it. 
00:29:53.837 [2024-06-10 12:33:59.228718] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.837 [2024-06-10 12:33:59.228766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.228782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.228789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.228795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.228808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.238673] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.238726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.238743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.238753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.238759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.238772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.248772] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.248824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.248840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.248847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.248853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.248867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 
00:29:53.838 [2024-06-10 12:33:59.258806] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.258858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.258874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.258882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.258888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.258902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.268734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.268785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.268801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.268808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.268814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.268828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.278851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.278905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.278921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.278928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.278934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.278947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 
00:29:53.838 [2024-06-10 12:33:59.288854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.288941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.288967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.288975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.288982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.289001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.298804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.298857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.298876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.298884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.298890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.298906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 00:29:53.838 [2024-06-10 12:33:59.308957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.838 [2024-06-10 12:33:59.309013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.838 [2024-06-10 12:33:59.309038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.838 [2024-06-10 12:33:59.309046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.838 [2024-06-10 12:33:59.309053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:53.838 [2024-06-10 12:33:59.309071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.838 qpair failed and we were unable to recover it. 
00:29:53.838 [2024-06-10 12:33:59.318892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.838 [2024-06-10 12:33:59.318946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.838 [2024-06-10 12:33:59.318964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.838 [2024-06-10 12:33:59.318972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.838 [2024-06-10 12:33:59.318978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.838 [2024-06-10 12:33:59.318993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.838 qpair failed and we were unable to recover it.
00:29:53.838 [2024-06-10 12:33:59.329002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.838 [2024-06-10 12:33:59.329053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.838 [2024-06-10 12:33:59.329074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.838 [2024-06-10 12:33:59.329082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.838 [2024-06-10 12:33:59.329088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.838 [2024-06-10 12:33:59.329103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.838 qpair failed and we were unable to recover it.
00:29:53.838 [2024-06-10 12:33:59.339014] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.838 [2024-06-10 12:33:59.339064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.838 [2024-06-10 12:33:59.339081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.838 [2024-06-10 12:33:59.339088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.838 [2024-06-10 12:33:59.339094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.838 [2024-06-10 12:33:59.339109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.838 qpair failed and we were unable to recover it.
00:29:53.838 [2024-06-10 12:33:59.349046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.838 [2024-06-10 12:33:59.349097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.838 [2024-06-10 12:33:59.349113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.838 [2024-06-10 12:33:59.349121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.838 [2024-06-10 12:33:59.349127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.838 [2024-06-10 12:33:59.349141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.838 qpair failed and we were unable to recover it.
00:29:53.838 [2024-06-10 12:33:59.359069] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.838 [2024-06-10 12:33:59.359132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.838 [2024-06-10 12:33:59.359149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.838 [2024-06-10 12:33:59.359156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.838 [2024-06-10 12:33:59.359162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.838 [2024-06-10 12:33:59.359175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.838 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.369079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.369142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.369158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.369166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.369172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.369186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.379134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.379182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.379205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.379213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.379219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.379233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.389156] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.389214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.389230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.389237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.389244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.389258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.399189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.399328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.399344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.399351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.399358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.399371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.409213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.409265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.409281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.409288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.409295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.409309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.419110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.419167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.419186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.419199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.419206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.419220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.429255] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.429306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.429322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.429329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.429335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.429349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:53.839 [2024-06-10 12:33:59.439165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:53.839 [2024-06-10 12:33:59.439222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:53.839 [2024-06-10 12:33:59.439238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:53.839 [2024-06-10 12:33:59.439245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:53.839 [2024-06-10 12:33:59.439251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:53.839 [2024-06-10 12:33:59.439265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.839 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.449300] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.449363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.449378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.449386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.449392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.449406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.459359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.459412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.459428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.459435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.459441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.459459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.469363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.469417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.469433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.469440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.469446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.469460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.479377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.479435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.479451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.479458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.479465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.479478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.489299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.489348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.489364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.489371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.489378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.489391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.499444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.499506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.499522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.499529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.499536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.499549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.509484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.509534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.509557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.509564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.509570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.509584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.519496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.519561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.519577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.519584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.519590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.519604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.529535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.529584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.529600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.529607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.529613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.529627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.539551] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.539601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.539617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.539624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.539631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.102 [2024-06-10 12:33:59.539644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.102 qpair failed and we were unable to recover it.
00:29:54.102 [2024-06-10 12:33:59.549470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.102 [2024-06-10 12:33:59.549520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.102 [2024-06-10 12:33:59.549536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.102 [2024-06-10 12:33:59.549543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.102 [2024-06-10 12:33:59.549549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.549566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.559613] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.559665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.559681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.559688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.559694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.559708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.569653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.569702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.569717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.569724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.569731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.569744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.579685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.579737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.579753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.579761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.579767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.579781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.589733] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.589785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.589801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.589808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.589814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.589828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.599728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.599786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.599805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.599812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.599818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.599831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.609727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.609780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.609796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.609804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.609810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.609824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.619775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.619835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.619852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.619864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.619871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.619886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.629832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.629909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.629925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.629932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.629939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.629953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.639816] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.639887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.639912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.639920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.639932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.639951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.649771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.649823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.649840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.649847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.649853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.649869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.659862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.659910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.659926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.659933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.659939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.659953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.669806] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.669865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.669889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.669898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.669905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.669925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.679985] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.680047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.103 [2024-06-10 12:33:59.680072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.103 [2024-06-10 12:33:59.680080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.103 [2024-06-10 12:33:59.680088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.103 [2024-06-10 12:33:59.680106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.103 qpair failed and we were unable to recover it.
00:29:54.103 [2024-06-10 12:33:59.689962] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.103 [2024-06-10 12:33:59.690019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.104 [2024-06-10 12:33:59.690037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.104 [2024-06-10 12:33:59.690044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.104 [2024-06-10 12:33:59.690051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.104 [2024-06-10 12:33:59.690065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.104 qpair failed and we were unable to recover it.
00:29:54.104 [2024-06-10 12:33:59.699989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.104 [2024-06-10 12:33:59.700069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.104 [2024-06-10 12:33:59.700086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.104 [2024-06-10 12:33:59.700093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.104 [2024-06-10 12:33:59.700100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.104 [2024-06-10 12:33:59.700114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.104 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.710013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.710066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.365 [2024-06-10 12:33:59.710082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.365 [2024-06-10 12:33:59.710089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.365 [2024-06-10 12:33:59.710096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.365 [2024-06-10 12:33:59.710110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.365 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.720040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.720091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.365 [2024-06-10 12:33:59.720107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.365 [2024-06-10 12:33:59.720114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.365 [2024-06-10 12:33:59.720121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.365 [2024-06-10 12:33:59.720134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.365 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.730043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.730090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.365 [2024-06-10 12:33:59.730106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.365 [2024-06-10 12:33:59.730113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.365 [2024-06-10 12:33:59.730124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.365 [2024-06-10 12:33:59.730138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.365 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.740070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.740122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.365 [2024-06-10 12:33:59.740138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.365 [2024-06-10 12:33:59.740145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.365 [2024-06-10 12:33:59.740151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.365 [2024-06-10 12:33:59.740165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.365 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.750118] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.750172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.365 [2024-06-10 12:33:59.750190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.365 [2024-06-10 12:33:59.750204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.365 [2024-06-10 12:33:59.750211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.365 [2024-06-10 12:33:59.750226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.365 qpair failed and we were unable to recover it.
00:29:54.365 [2024-06-10 12:33:59.760078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.365 [2024-06-10 12:33:59.760132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.760149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.760156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.760163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.760176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.770189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.770244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.770260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.770267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.770273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.770287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.780206] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.780267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.780283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.780290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.780296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.780310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.790233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.790286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.790301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.790309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.790315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.790329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.800180] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.800239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.800256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.800264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.800271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.800285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.810296] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.810344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.810360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.810367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.810373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.810388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.820334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.820382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.820398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.820405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.820415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.820428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.830357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.830408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.830424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.830431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.830437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.830452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.840370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.840424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.840441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.840448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.840454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.840467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.850299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.850353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.850369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.850376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.850382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.850396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.860427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.860478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.860494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.860502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.860508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.860521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.870391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.870444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.870460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.870467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.870473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.870487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.880491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.366 [2024-06-10 12:33:59.880547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.366 [2024-06-10 12:33:59.880562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.366 [2024-06-10 12:33:59.880570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.366 [2024-06-10 12:33:59.880576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0
00:29:54.366 [2024-06-10 12:33:59.880590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:54.366 qpair failed and we were unable to recover it.
00:29:54.366 [2024-06-10 12:33:59.890443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.366 [2024-06-10 12:33:59.890492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.366 [2024-06-10 12:33:59.890508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.366 [2024-06-10 12:33:59.890515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.890521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.890536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 00:29:54.367 [2024-06-10 12:33:59.900546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.900623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.900639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.900646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.900652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.900666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 00:29:54.367 [2024-06-10 12:33:59.910571] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.910618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.910634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.910644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.910651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.910664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 
00:29:54.367 [2024-06-10 12:33:59.920613] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.920672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.920688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.920695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.920701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.920714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 00:29:54.367 [2024-06-10 12:33:59.930643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.930692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.930707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.930714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.930720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.930734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 00:29:54.367 [2024-06-10 12:33:59.940602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.940662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.940678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.940685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.940692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.940705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 
00:29:54.367 [2024-06-10 12:33:59.950681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.950734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.950749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.950757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.950763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.950777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 00:29:54.367 [2024-06-10 12:33:59.960700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.367 [2024-06-10 12:33:59.960757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.367 [2024-06-10 12:33:59.960773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.367 [2024-06-10 12:33:59.960780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.367 [2024-06-10 12:33:59.960787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x141d8c0 00:29:54.367 [2024-06-10 12:33:59.960800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:54.367 qpair failed and we were unable to recover it. 
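A reading of the repeated failure above (an interpretation from the status fields, assuming the NVMe-oF Fabrics status encoding): sct 1 marks a command-specific status type, and for a Fabrics CONNECT command sc 130 (0x82) is Connect Invalid Parameters. The target-side line 'Unknown controller ID 0x1' shows the trigger: each I/O-qpair CONNECT carries CNTLID 0x1 for a controller the target has already torn down mid-test, so the qpair is rejected. A one-liner to check the decimal-to-hex reading in shell:

  # Confirm the decimal status the host logs (sc 130) is the Fabrics
  # CONNECT status 0x82, read here as Connect Invalid Parameters.
  printf 'sc 130 = 0x%x\n' 130    # prints: sc 130 = 0x82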
00:29:54.367 [32 I/O completions (20 reads, 12 writes) completed with error (sct=0, sc=8), each followed by 'starting I/O failed']
00:29:54.367 [2024-06-10 12:33:59.961720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:54.628 [2024-06-10 12:33:59.970749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.628 [2024-06-10 12:33:59.970864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.628 [2024-06-10 12:33:59.970915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.628 [2024-06-10 12:33:59.970938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.628 [2024-06-10 12:33:59.970959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa8c000b90
00:29:54.629 [2024-06-10 12:33:59.971004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:54.629 qpair failed and we were unable to recover it.
00:29:54.629 [2024-06-10 12:33:59.980945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.629 [2024-06-10 12:33:59.981022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.629 [2024-06-10 12:33:59.981056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.629 [2024-06-10 12:33:59.981070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.629 [2024-06-10 12:33:59.981083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa8c000b90
00:29:54.629 [2024-06-10 12:33:59.981111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:54.629 qpair failed and we were unable to recover it.
00:29:54.629 [2024-06-10 12:33:59.981514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b5d0 is same with the state(5) to be set
00:29:54.629 [32 I/O completions (21 reads, 11 writes) completed with error (sct=0, sc=8), each followed by 'starting I/O failed']
00:29:54.629 [2024-06-10 12:33:59.982394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:54.629 [2024-06-10 12:33:59.990791] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.629 [2024-06-10 12:33:59.990901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.629 [2024-06-10 12:33:59.990951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.629 [2024-06-10 12:33:59.990974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.629 [2024-06-10 12:33:59.990994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa9c000b90
00:29:54.629 [2024-06-10 12:33:59.991040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:54.629 qpair failed and we were unable to recover it.
00:29:54.629 [2024-06-10 12:34:00.000892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.629 [2024-06-10 12:34:00.000988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.629 [2024-06-10 12:34:00.001051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.629 [2024-06-10 12:34:00.001074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.629 [2024-06-10 12:34:00.001091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa9c000b90
00:29:54.629 [2024-06-10 12:34:00.001627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:54.629 qpair failed and we were unable to recover it.
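The same CONNECT can be attempted by hand from any Linux initiator using only the transport ID fields the log prints (trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1). A minimal sketch with nvme-cli, assuming the probing host has nvme-cli installed and network reach to the target; this is not part of the test run itself:

  # Load the kernel TCP initiator, then discover the target from the log.
  sudo modprobe nvme-tcp
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Attempt the CONNECT the test keeps failing: against a healthy target
  # this creates a /dev/nvme* handle, against a mid-disconnect target the
  # qpair is rejected much as in the entries above.
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1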
00:29:54.629 [2024-06-10 12:34:00.010903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.629 [2024-06-10 12:34:00.010952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.629 [2024-06-10 12:34:00.010971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.629 [2024-06-10 12:34:00.010977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.629 [2024-06-10 12:34:00.010982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa94000b90
00:29:54.629 [2024-06-10 12:34:00.010996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:54.629 qpair failed and we were unable to recover it.
00:29:54.629 [2024-06-10 12:34:00.020855] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:54.629 [2024-06-10 12:34:00.020944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:54.629 [2024-06-10 12:34:00.020957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:54.629 [2024-06-10 12:34:00.020963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:54.629 [2024-06-10 12:34:00.020968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffa94000b90
00:29:54.629 [2024-06-10 12:34:00.020980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:54.629 qpair failed and we were unable to recover it.
00:29:54.629 [2024-06-10 12:34:00.021656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141b5d0 (9): Bad file descriptor
00:29:54.629 Initializing NVMe Controllers
00:29:54.629 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:54.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:54.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:54.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:54.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:54.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:54.629 Initialization complete. Launching workers.
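The aborted I/O and rejected reconnects above are the point of this test: the target drops the subsystem while the initiator's worker threads are mid-I/O. A sketch of the kind of target-side teardown and re-create that produces this, assuming SPDK's stock scripts/rpc.py against the running nvmf_tgt (the exact sequence host/target_disconnect.sh drives may differ):

  # rpc.py talks to nvmf_tgt over the default /var/tmp/spdk.sock socket.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # in-flight I/O now completes with errors
  # Re-create the subsystem and listener so the host's next CONNECT can succeed
  # (a namespace would also be re-added with nvmf_subsystem_add_ns before I/O resumes).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420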
00:29:54.629 Starting thread on core 1 00:29:54.629 Starting thread on core 2 00:29:54.629 Starting thread on core 3 00:29:54.629 Starting thread on core 0 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:54.629 00:29:54.629 real 0m11.278s 00:29:54.629 user 0m21.548s 00:29:54.629 sys 0m3.627s 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.629 ************************************ 00:29:54.629 END TEST nvmf_target_disconnect_tc2 00:29:54.629 ************************************ 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.629 rmmod nvme_tcp 00:29:54.629 rmmod nvme_fabrics 00:29:54.629 rmmod nvme_keyring 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 854043 ']' 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 854043 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 854043 ']' 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 854043 00:29:54.629 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 854043 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 854043' 00:29:54.630 killing process with pid 854043 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 854043 00:29:54.630 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 854043 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.889 12:34:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.889 12:34:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.803 12:34:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.803 00:29:56.803 real 0m22.234s 00:29:56.803 user 0m49.133s 00:29:56.803 sys 0m10.119s 00:29:56.803 12:34:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:56.803 12:34:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:56.803 ************************************ 00:29:56.803 END TEST nvmf_target_disconnect 00:29:56.803 ************************************ 00:29:57.099 12:34:02 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:57.099 12:34:02 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:57.099 12:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.099 12:34:02 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:57.099 00:29:57.099 real 23m18.565s 00:29:57.099 user 47m53.301s 00:29:57.099 sys 7m33.472s 00:29:57.099 12:34:02 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:57.100 12:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.100 ************************************ 00:29:57.100 END TEST nvmf_tcp 00:29:57.100 ************************************ 00:29:57.100 12:34:02 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:57.100 12:34:02 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:57.100 12:34:02 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:57.100 12:34:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:57.100 12:34:02 -- common/autotest_common.sh@10 -- # set +x 00:29:57.100 ************************************ 00:29:57.100 START TEST spdkcli_nvmf_tcp 00:29:57.100 ************************************ 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:57.100 * Looking for test storage... 
00:29:57.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=855870 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 855870 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 855870 ']' 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:57.100 12:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.364 [2024-06-10 12:34:02.739775] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:29:57.364 [2024-06-10 12:34:02.739851] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855870 ] 00:29:57.364 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.364 [2024-06-10 12:34:02.811449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.364 [2024-06-10 12:34:02.887470] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.364 [2024-06-10 12:34:02.887569] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.938 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:57.938 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:29:57.938 12:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:57.938 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:57.938 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.199 12:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:58.199 12:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:58.199 12:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:58.199 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:58.200 12:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.200 12:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:58.200 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:58.200 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:58.200 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:58.200 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:58.200 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:58.200 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:58.200 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.200 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:58.200 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.200 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:58.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:58.200 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:58.200 ' 00:30:00.748 [2024-06-10 12:34:05.874239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.691 [2024-06-10 12:34:07.038103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:04.236 [2024-06-10 12:34:09.332766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:06.149 [2024-06-10 12:34:11.410940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:07.538 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:07.538 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:07.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:07.538 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:07.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:07.538 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:07.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:07.538 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:07.538 12:34:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.110 12:34:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:08.110 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:08.110 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:08.110 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:08.110 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:08.110 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:08.110 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:08.110 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:08.110 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:08.110 ' 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:13.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:13.396 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:13.396 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:13.396 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 855870 ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 855870' 00:30:13.396 killing process with pid 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 855870 ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 855870 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 855870 ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 855870 00:30:13.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (855870) - No such process 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 855870 is not found' 00:30:13.396 Process with pid 855870 is not found 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:13.396 00:30:13.396 real 0m16.099s 00:30:13.396 user 0m33.864s 00:30:13.396 sys 0m0.779s 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:13.396 12:34:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 ************************************ 00:30:13.396 END TEST spdkcli_nvmf_tcp 00:30:13.396 ************************************ 00:30:13.396 12:34:18 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:13.396 12:34:18 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:13.396 12:34:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:13.396 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 ************************************ 00:30:13.396 START TEST nvmf_identify_passthru 00:30:13.396 ************************************ 00:30:13.396 12:34:18 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:13.396 * Looking for test storage... 00:30:13.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.396 12:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.396 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.397 12:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.397 12:34:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:13.397 12:34:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.397 12:34:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.397 12:34:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.397 12:34:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:13.397 12:34:18 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.397 12:34:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
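
The gather_supported_nvmf_pci_devs step above classifies candidate NICs by PCI vendor/device ID before any interface is touched: Intel E810 parts (0x1592, 0x159b), X722 (0x37d2), and a list of Mellanox ConnectX IDs, registered in the entries that follow. A minimal standalone sketch of that classification, assuming direct sysfs reads instead of common.sh's pre-populated pci_bus_cache:

    # Sketch only: common.sh builds a pci_bus_cache map first; here we read sysfs directly.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("${pci##*/}") ;;      # E810 family
            "$intel:0x37d2") x722+=("${pci##*/}") ;;                        # X722
            "$mellanox:0x1017" | "$mellanox:0x101d") mlx+=("${pci##*/}") ;; # two of the ConnectX IDs registered below
        esac
    done
    printf 'E810 candidate: %s\n' "${e810[@]}"
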
00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.539 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.539 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.539 12:34:26 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.539 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.539 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.539 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:21.540 12:34:26 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:21.540 12:34:26 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:21.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:21.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms
00:30:21.540
00:30:21.540 --- 10.0.0.2 ping statistics ---
00:30:21.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.540 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:21.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:21.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:30:21.540
00:30:21.540 --- 10.0.0.1 ping statistics ---
00:30:21.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.540 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:21.540 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:21.540 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:21.540 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=()
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs))
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=()
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:30:21.540 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr'
00:30:21.801 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 ))
00:30:21.801 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0
00:30:21.801 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0
00:30:21.801 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:30:21.801 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
00:30:21.801 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:30:21.801 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:30:21.801 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:30:21.801 EAL: No free 2048 kB hugepages reported on node 1
00:30:22.373
12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:30:22.373 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:22.373 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:22.373 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:22.373 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:22.633 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:22.633 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:22.633 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:22.633 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=863477 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:22.633 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 863477 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 863477 ']' 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:22.634 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.894 [2024-06-10 12:34:28.256423] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:30:22.894 [2024-06-10 12:34:28.256485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.894 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.894 [2024-06-10 12:34:28.332149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.894 [2024-06-10 12:34:28.405835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.894 [2024-06-10 12:34:28.405875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
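
Above, identify_passthru.sh records the local controller's serial (S64GNE0R605499) and model (SAMSUNG) with spdk_nvme_identify over PCIe, then launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which holds subsystem initialization until the RPCs that follow arrive over /var/tmp/spdk.sock. The same sequence expressed with scripts/rpc.py looks roughly like this (the test's rpc_cmd wrapper forwards to rpc.py; the polling loop is a simplified stand-in for waitforlisten):

    # Launch paused, enable passthru identify, then resume framework init.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done  # simplified waitforlisten
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr  # admin_cmd_passthru.identify_ctrlr=true
    ./scripts/rpc.py framework_start_init
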
00:30:22.894 [2024-06-10 12:34:28.405883] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.894 [2024-06-10 12:34:28.405889] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.894 [2024-06-10 12:34:28.405894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.894 [2024-06-10 12:34:28.406029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.894 [2024-06-10 12:34:28.406144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.894 [2024-06-10 12:34:28.406305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.894 [2024-06-10 12:34:28.406305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:30:23.466 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.466 INFO: Log level set to 20 00:30:23.466 INFO: Requests: 00:30:23.466 { 00:30:23.466 "jsonrpc": "2.0", 00:30:23.466 "method": "nvmf_set_config", 00:30:23.466 "id": 1, 00:30:23.466 "params": { 00:30:23.466 "admin_cmd_passthru": { 00:30:23.466 "identify_ctrlr": true 00:30:23.466 } 00:30:23.466 } 00:30:23.466 } 00:30:23.466 00:30:23.466 INFO: response: 00:30:23.466 { 00:30:23.466 "jsonrpc": "2.0", 00:30:23.466 "id": 1, 00:30:23.466 "result": true 00:30:23.466 } 00:30:23.466 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.466 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.466 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.466 INFO: Setting log level to 20 00:30:23.466 INFO: Setting log level to 20 00:30:23.466 INFO: Log level set to 20 00:30:23.466 INFO: Log level set to 20 00:30:23.466 INFO: Requests: 00:30:23.466 { 00:30:23.466 "jsonrpc": "2.0", 00:30:23.466 "method": "framework_start_init", 00:30:23.466 "id": 1 00:30:23.466 } 00:30:23.466 00:30:23.466 INFO: Requests: 00:30:23.466 { 00:30:23.466 "jsonrpc": "2.0", 00:30:23.466 "method": "framework_start_init", 00:30:23.466 "id": 1 00:30:23.466 } 00:30:23.466 00:30:23.727 [2024-06-10 12:34:29.106612] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:23.727 INFO: response: 00:30:23.727 { 00:30:23.727 "jsonrpc": "2.0", 00:30:23.727 "id": 1, 00:30:23.727 "result": true 00:30:23.727 } 00:30:23.727 00:30:23.727 INFO: response: 00:30:23.727 { 00:30:23.727 "jsonrpc": "2.0", 00:30:23.727 "id": 1, 00:30:23.727 "result": true 00:30:23.727 } 00:30:23.727 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.727 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.727 12:34:29 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:23.727 INFO: Setting log level to 40 00:30:23.727 INFO: Setting log level to 40 00:30:23.727 INFO: Setting log level to 40 00:30:23.727 [2024-06-10 12:34:29.119853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.727 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.727 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.727 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 Nvme0n1 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 [2024-06-10 12:34:29.510415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 [ 00:30:23.988 { 00:30:23.988 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:23.988 "subtype": "Discovery", 00:30:23.988 "listen_addresses": [], 00:30:23.988 "allow_any_host": true, 00:30:23.988 "hosts": [] 00:30:23.988 }, 00:30:23.988 { 00:30:23.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.988 "subtype": "NVMe", 00:30:23.988 "listen_addresses": [ 00:30:23.988 { 00:30:23.988 "trtype": "TCP", 00:30:23.988 "adrfam": "IPv4", 00:30:23.988 "traddr": "10.0.0.2", 00:30:23.988 "trsvcid": "4420" 00:30:23.988 } 00:30:23.988 ], 00:30:23.988 "allow_any_host": true, 00:30:23.988 "hosts": [], 00:30:23.988 "serial_number": 
"SPDK00000000000001", 00:30:23.988 "model_number": "SPDK bdev Controller", 00:30:23.988 "max_namespaces": 1, 00:30:23.988 "min_cntlid": 1, 00:30:23.988 "max_cntlid": 65519, 00:30:23.988 "namespaces": [ 00:30:23.988 { 00:30:23.988 "nsid": 1, 00:30:23.988 "bdev_name": "Nvme0n1", 00:30:23.988 "name": "Nvme0n1", 00:30:23.988 "nguid": "36344730526054990025384500000083", 00:30:23.988 "uuid": "36344730-5260-5499-0025-384500000083" 00:30:23.988 } 00:30:23.988 ] 00:30:23.988 } 00:30:23.988 ] 00:30:23.988 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:23.988 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:23.988 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.249 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:30:24.249 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:24.249 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:24.249 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:24.249 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.510 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.510 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:24.510 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:24.510 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.510 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.510 rmmod nvme_tcp 00:30:24.510 rmmod nvme_fabrics 00:30:24.510 rmmod nvme_keyring 00:30:24.510 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.510 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:24.510 12:34:30 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:24.510 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 863477 ']' 00:30:24.511 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 863477 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 863477 ']' 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 863477 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 863477 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 863477' 00:30:24.511 killing process with pid 863477 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 863477 00:30:24.511 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 863477 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.772 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.772 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.772 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.314 12:34:32 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.314 00:30:27.314 real 0m13.705s 00:30:27.314 user 0m10.369s 00:30:27.314 sys 0m6.847s 00:30:27.314 12:34:32 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:27.314 12:34:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.314 ************************************ 00:30:27.314 END TEST nvmf_identify_passthru 00:30:27.314 ************************************ 00:30:27.314 12:34:32 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:27.314 12:34:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:27.314 12:34:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:27.314 12:34:32 -- common/autotest_common.sh@10 -- # set +x 00:30:27.314 ************************************ 00:30:27.314 START TEST nvmf_dif 00:30:27.314 ************************************ 00:30:27.314 12:34:32 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:27.314 * Looking for test storage... 
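
The teardown above follows nvmftestfini's usual order: kill the target process, retry unloading the NVMe transport modules under set +e with a 20-iteration loop (the modules can still be referenced briefly after shutdown), then tear down the namespace and flush the initiator interface. A condensed sketch; the namespace-delete step is an assumed reading of what _remove_spdk_ns does:

    trap - SIGINT SIGTERM EXIT
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                          # back-off between retries (assumed interval)
    done
    set -e
    ip netns delete cvl_0_0_ns_spdk      # what _remove_spdk_ns boils down to (sketch)
    ip -4 addr flush cvl_0_1
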
00:30:27.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.314 12:34:32 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.314 12:34:32 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.314 12:34:32 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.314 12:34:32 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.314 12:34:32 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.314 12:34:32 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.315 12:34:32 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.315 12:34:32 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.315 12:34:32 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:27.315 12:34:32 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.315 12:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:27.315 12:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:27.315 12:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:27.315 12:34:32 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:27.315 12:34:32 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.315 12:34:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:27.315 12:34:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.315 12:34:32 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.315 12:34:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:35.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:35.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.531 12:34:40 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
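
The discovery pass in nvmf_dif repeats the same sysfs walk as before: for each matched PCI function it globs the net/ subdirectory to find the bound kernel interface, strips the path, and keeps interfaces that are up. Isolated, the pattern from the @382-@400 entries is:

    # Resolve the netdevs backing one PCI NIC.
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip path -> interface name
    for net_dev in "${pci_net_devs[@]}"; do
        # the log's [[ up == up ]] test; reading operstate here is an assumption
        [[ $(< "/sys/class/net/$net_dev/operstate") == up ]] || continue
        echo "Found net devices under $pci: $net_dev"
    done
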
00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:35.532 Found net devices under 0000:31:00.0: cvl_0_0 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:35.532 Found net devices under 0000:31:00.1: cvl_0_1 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.532 12:34:40 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:35.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:30:35.532 00:30:35.532 --- 10.0.0.2 ping statistics --- 00:30:35.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.532 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:30:35.532 00:30:35.532 --- 10.0.0.1 ping statistics --- 00:30:35.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.532 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:35.532 12:34:40 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:39.743 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:39.743 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.743 12:34:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:39.743 12:34:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=870277 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 870277 00:30:39.743 12:34:44 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 870277 ']' 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:39.743 12:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.743 [2024-06-10 12:34:44.942854] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:30:39.743 [2024-06-10 12:34:44.942906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.743 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.743 [2024-06-10 12:34:45.017114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.743 [2024-06-10 12:34:45.086502] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.743 [2024-06-10 12:34:45.086537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.743 [2024-06-10 12:34:45.086545] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.743 [2024-06-10 12:34:45.086551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.743 [2024-06-10 12:34:45.086556] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
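
Once the nvmf_dif target is up and listening on /var/tmp/spdk.sock, the RPCs that follow in the log build the DIF test fixture: a TCP transport with --dif-insert-or-strip, a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 (the NULL_* values exported by dif.sh above), and a TCP listener on 10.0.0.2:4420. As a plain rpc.py sequence (rpc_cmd in the test forwards to it):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
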
00:30:39.743 [2024-06-10 12:34:45.086575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:30:40.312 12:34:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.312 12:34:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.312 12:34:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:40.312 12:34:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.312 [2024-06-10 12:34:45.744798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.312 12:34:45 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.313 12:34:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:40.313 12:34:45 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:40.313 12:34:45 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:40.313 12:34:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.313 ************************************ 00:30:40.313 START TEST fio_dif_1_default 00:30:40.313 ************************************ 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:40.313 bdev_null0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:40.313 [2024-06-10 12:34:45.833144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.313 { 00:30:40.313 "params": { 00:30:40.313 "name": "Nvme$subsystem", 00:30:40.313 "trtype": "$TEST_TRANSPORT", 00:30:40.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.313 "adrfam": "ipv4", 00:30:40.313 "trsvcid": "$NVMF_PORT", 00:30:40.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.313 "hdgst": ${hdgst:-false}, 00:30:40.313 "ddgst": ${ddgst:-false} 00:30:40.313 }, 00:30:40.313 "method": "bdev_nvme_attach_controller" 00:30:40.313 } 00:30:40.313 EOF 00:30:40.313 )") 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.313 "params": { 00:30:40.313 "name": "Nvme0", 00:30:40.313 "trtype": "tcp", 00:30:40.313 "traddr": "10.0.0.2", 00:30:40.313 "adrfam": "ipv4", 00:30:40.313 "trsvcid": "4420", 00:30:40.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.313 "hdgst": false, 00:30:40.313 "ddgst": false 00:30:40.313 }, 00:30:40.313 "method": "bdev_nvme_attach_controller" 00:30:40.313 }' 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:40.313 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:40.596 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:40.596 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:40.596 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:40.596 12:34:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.867 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:40.867 fio-3.35 00:30:40.867 Starting 1 thread 00:30:40.867 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.107 00:30:53.107 filename0: (groupid=0, jobs=1): err= 0: pid=870806: Mon Jun 10 12:34:56 2024 00:30:53.107 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10040msec) 00:30:53.107 slat (nsec): min=5619, max=46779, avg=6443.03, stdev=2022.42 00:30:53.107 clat (usec): min=544, max=42868, avg=21208.45, stdev=20159.06 00:30:53.107 lat (usec): min=552, max=42894, avg=21214.89, stdev=20159.00 00:30:53.107 clat percentiles (usec): 00:30:53.107 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 816], 00:30:53.107 | 30.00th=[ 848], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:30:53.107 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:53.107 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:53.107 | 99.99th=[42730] 00:30:53.107 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=755.20, stdev=24.13, samples=20 00:30:53.107 iops : min= 176, max= 192, 
avg=188.80, stdev= 6.03, samples=20 00:30:53.107 lat (usec) : 750=2.38%, 1000=46.56% 00:30:53.107 lat (msec) : 2=0.53%, 50=50.53% 00:30:53.107 cpu : usr=95.29%, sys=4.52%, ctx=14, majf=0, minf=233 00:30:53.107 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.107 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.107 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.107 00:30:53.107 Run status group 0 (all jobs): 00:30:53.107 READ: bw=754KiB/s (772kB/s), 754KiB/s-754KiB/s (772kB/s-772kB/s), io=7568KiB (7750kB), run=10040-10040msec 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 00:30:53.107 real 0m11.242s 00:30:53.107 user 0m24.592s 00:30:53.107 sys 0m0.804s 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 ************************************ 00:30:53.107 END TEST fio_dif_1_default 00:30:53.107 ************************************ 00:30:53.107 12:34:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:53.107 12:34:57 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:53.107 12:34:57 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 ************************************ 00:30:53.107 START TEST fio_dif_1_multi_subsystems 00:30:53.107 ************************************ 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.107 12:34:57 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 bdev_null0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 [2024-06-10 12:34:57.153068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.107 bdev_null1 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:53.107 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.108 { 00:30:53.108 "params": { 00:30:53.108 "name": "Nvme$subsystem", 00:30:53.108 "trtype": "$TEST_TRANSPORT", 00:30:53.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.108 "adrfam": "ipv4", 00:30:53.108 "trsvcid": "$NVMF_PORT", 00:30:53.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.108 "hdgst": ${hdgst:-false}, 00:30:53.108 "ddgst": ${ddgst:-false} 00:30:53.108 }, 00:30:53.108 "method": "bdev_nvme_attach_controller" 00:30:53.108 } 00:30:53.108 EOF 00:30:53.108 )") 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.108 
12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.108 { 00:30:53.108 "params": { 00:30:53.108 "name": "Nvme$subsystem", 00:30:53.108 "trtype": "$TEST_TRANSPORT", 00:30:53.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.108 "adrfam": "ipv4", 00:30:53.108 "trsvcid": "$NVMF_PORT", 00:30:53.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.108 "hdgst": ${hdgst:-false}, 00:30:53.108 "ddgst": ${ddgst:-false} 00:30:53.108 }, 00:30:53.108 "method": "bdev_nvme_attach_controller" 00:30:53.108 } 00:30:53.108 EOF 00:30:53.108 )") 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
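The heredoc dance just traced is gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem, then splicing the fragments into a bdev-subsystem JSON document that jq validates; the fully resolved two-controller output is printed below. A condensed sketch of the pattern — the outer JSON skeleton is my reconstruction, since the trace only shows the per-subsystem fragments and the IFS/printf join:

```bash
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per requested subsystem id,
        # as in the traced heredoc.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas inside a subshell so IFS stays local,
    # then validate/pretty-print with jq (the trace's jq/IFS/printf steps).
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}
```

Called as gen_nvmf_target_json 0 1, this yields a document equivalent to the Nvme0/Nvme1 configuration that fio receives on /dev/fd/62 below.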
00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.108 "params": { 00:30:53.108 "name": "Nvme0", 00:30:53.108 "trtype": "tcp", 00:30:53.108 "traddr": "10.0.0.2", 00:30:53.108 "adrfam": "ipv4", 00:30:53.108 "trsvcid": "4420", 00:30:53.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.108 "hdgst": false, 00:30:53.108 "ddgst": false 00:30:53.108 }, 00:30:53.108 "method": "bdev_nvme_attach_controller" 00:30:53.108 },{ 00:30:53.108 "params": { 00:30:53.108 "name": "Nvme1", 00:30:53.108 "trtype": "tcp", 00:30:53.108 "traddr": "10.0.0.2", 00:30:53.108 "adrfam": "ipv4", 00:30:53.108 "trsvcid": "4420", 00:30:53.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.108 "hdgst": false, 00:30:53.108 "ddgst": false 00:30:53.108 }, 00:30:53.108 "method": "bdev_nvme_attach_controller" 00:30:53.108 }' 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.108 12:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.108 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.108 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.108 fio-3.35 00:30:53.108 Starting 2 threads 00:30:53.108 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.095 00:31:03.095 filename0: (groupid=0, jobs=1): err= 0: pid=873016: Mon Jun 10 12:35:08 2024 00:31:03.095 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10035msec) 00:31:03.095 slat (nsec): min=5629, max=54108, avg=6545.65, stdev=2089.40 00:31:03.095 clat (usec): min=621, max=42072, avg=21018.85, stdev=20200.85 00:31:03.095 lat (usec): min=630, max=42078, avg=21025.40, stdev=20200.79 00:31:03.095 clat percentiles (usec): 00:31:03.095 | 1.00th=[ 668], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 799], 00:31:03.095 | 30.00th=[ 930], 40.00th=[ 971], 50.00th=[ 1565], 60.00th=[41157], 00:31:03.095 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:03.095 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:03.095 | 99.99th=[42206] 00:31:03.095 
bw ( KiB/s): min= 704, max= 768, per=66.66%, avg=761.60, stdev=19.70, samples=20 00:31:03.095 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:31:03.095 lat (usec) : 750=8.96%, 1000=36.43% 00:31:03.095 lat (msec) : 2=4.72%, 50=49.90% 00:31:03.095 cpu : usr=96.91%, sys=2.89%, ctx=15, majf=0, minf=119 00:31:03.095 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.095 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.095 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:03.095 filename1: (groupid=0, jobs=1): err= 0: pid=873017: Mon Jun 10 12:35:08 2024 00:31:03.095 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10009msec) 00:31:03.095 slat (nsec): min=5627, max=32317, avg=6607.99, stdev=1615.56 00:31:03.095 clat (usec): min=40950, max=42493, avg=41858.77, stdev=322.88 00:31:03.095 lat (usec): min=40956, max=42525, avg=41865.38, stdev=322.94 00:31:03.095 clat percentiles (usec): 00:31:03.095 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:31:03.095 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:03.095 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:03.095 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:03.095 | 99.99th=[42730] 00:31:03.095 bw ( KiB/s): min= 352, max= 384, per=33.29%, avg=380.80, stdev= 9.85, samples=20 00:31:03.095 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:03.095 lat (msec) : 50=100.00% 00:31:03.095 cpu : usr=96.75%, sys=3.05%, ctx=14, majf=0, minf=130 00:31:03.095 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.096 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.096 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:03.096 00:31:03.096 Run status group 0 (all jobs): 00:31:03.096 READ: bw=1142KiB/s (1169kB/s), 382KiB/s-761KiB/s (391kB/s-779kB/s), io=11.2MiB (11.7MB), run=10009-10035msec 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 
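With the two-subsystem run complete, the harness unwinds what it built: destroy_subsystems 0 1 deletes each NVMe-oF subsystem and then its backing null bdev, the reverse of the setup order, as the teardown continuing below shows. Reconstructed from the rpc_cmd calls visible in this trace (a paraphrase of dif.sh's helpers, not the literal source — in the real script the DIF type comes from a variable, hard-coded to 1 here; the later rand_params cases pass 3 and then 2):

```bash
# rpc_cmd forwards to scripts/rpc.py against the target's /var/tmp/spdk.sock.
create_subsystem() {
    local sub_id=$1
    # 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata for DIF.
    rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
}

destroy_subsystem() {
    local sub_id=$1
    # Teardown mirrors setup in reverse: subsystem first, then its bdev.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
    rpc_cmd bdev_null_delete "bdev_null$sub_id"
}
```

Because the transport was created with --dif-insert-or-strip, it is these metadata-carrying null bdevs that let the target exercise DIF insertion and stripping on the TCP data path.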
00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 00:31:03.096 real 0m11.469s 00:31:03.096 user 0m36.654s 00:31:03.096 sys 0m0.943s 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 ************************************ 00:31:03.096 END TEST fio_dif_1_multi_subsystems 00:31:03.096 ************************************ 00:31:03.096 12:35:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:03.096 12:35:08 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:03.096 12:35:08 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 ************************************ 00:31:03.096 START TEST fio_dif_rand_params 00:31:03.096 ************************************ 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 bdev_null0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.096 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.356 [2024-06-10 12:35:08.703574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.356 { 00:31:03.356 "params": { 00:31:03.356 "name": "Nvme$subsystem", 00:31:03.356 "trtype": "$TEST_TRANSPORT", 00:31:03.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.356 "adrfam": "ipv4", 00:31:03.356 "trsvcid": "$NVMF_PORT", 00:31:03.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.356 "hdgst": ${hdgst:-false}, 00:31:03.356 "ddgst": ${ddgst:-false} 00:31:03.356 }, 
00:31:03.356 "method": "bdev_nvme_attach_controller" 00:31:03.356 } 00:31:03.356 EOF 00:31:03.356 )") 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:03.356 "params": { 00:31:03.356 "name": "Nvme0", 00:31:03.356 "trtype": "tcp", 00:31:03.356 "traddr": "10.0.0.2", 00:31:03.356 "adrfam": "ipv4", 00:31:03.356 "trsvcid": "4420", 00:31:03.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.356 "hdgst": false, 00:31:03.356 "ddgst": false 00:31:03.356 }, 00:31:03.356 "method": "bdev_nvme_attach_controller" 00:31:03.356 }' 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.356 12:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.617 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:03.617 ... 
00:31:03.617 fio-3.35 00:31:03.617 Starting 3 threads 00:31:03.617 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.196 00:31:10.196 filename0: (groupid=0, jobs=1): err= 0: pid=875516: Mon Jun 10 12:35:14 2024 00:31:10.196 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5010msec) 00:31:10.196 slat (nsec): min=5659, max=32127, avg=6279.77, stdev=1377.50 00:31:10.196 clat (usec): min=5886, max=92511, avg=14357.26, stdev=12740.33 00:31:10.196 lat (usec): min=5892, max=92517, avg=14363.54, stdev=12740.30 00:31:10.196 clat percentiles (usec): 00:31:10.196 | 1.00th=[ 6456], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8586], 00:31:10.196 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[11207], 00:31:10.196 | 70.00th=[11994], 80.00th=[13042], 90.00th=[15664], 95.00th=[49546], 00:31:10.196 | 99.00th=[53216], 99.50th=[55837], 99.90th=[90702], 99.95th=[92799], 00:31:10.196 | 99.99th=[92799] 00:31:10.196 bw ( KiB/s): min=18432, max=35840, per=33.70%, avg=26700.80, stdev=5863.90, samples=10 00:31:10.196 iops : min= 144, max= 280, avg=208.60, stdev=45.81, samples=10 00:31:10.196 lat (msec) : 10=40.34%, 20=50.10%, 50=4.68%, 100=4.88% 00:31:10.196 cpu : usr=96.31%, sys=3.45%, ctx=14, majf=0, minf=86 00:31:10.196 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.196 filename0: (groupid=0, jobs=1): err= 0: pid=875517: Mon Jun 10 12:35:14 2024 00:31:10.196 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(131MiB/5044msec) 00:31:10.196 slat (nsec): min=5664, max=31840, avg=8407.64, stdev=1448.40 00:31:10.196 clat (usec): min=5475, max=91952, avg=14429.55, stdev=12945.92 00:31:10.196 lat (usec): min=5481, max=91961, avg=14437.95, stdev=12945.79 00:31:10.196 clat percentiles (usec): 00:31:10.196 | 1.00th=[ 6128], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8291], 00:31:10.196 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11600], 00:31:10.196 | 70.00th=[12649], 80.00th=[14484], 90.00th=[17171], 95.00th=[49546], 00:31:10.196 | 99.00th=[56361], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:31:10.196 | 99.99th=[91751] 00:31:10.196 bw ( KiB/s): min=12288, max=38144, per=33.70%, avg=26700.80, stdev=8365.32, samples=10 00:31:10.196 iops : min= 96, max= 298, avg=208.60, stdev=65.35, samples=10 00:31:10.196 lat (msec) : 10=40.67%, 20=50.53%, 50=4.21%, 100=4.59% 00:31:10.196 cpu : usr=95.97%, sys=3.81%, ctx=13, majf=0, minf=83 00:31:10.196 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.196 filename0: (groupid=0, jobs=1): err= 0: pid=875518: Mon Jun 10 12:35:14 2024 00:31:10.196 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5039msec) 00:31:10.196 slat (nsec): min=5663, max=33178, avg=8188.27, stdev=1701.90 00:31:10.196 clat (usec): min=5635, max=92258, avg=14650.95, stdev=12774.91 00:31:10.196 lat (usec): min=5641, max=92266, avg=14659.14, stdev=12774.90 00:31:10.196 clat percentiles (usec): 
00:31:10.196 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8455], 00:31:10.196 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10552], 60.00th=[11207], 00:31:10.196 | 70.00th=[11994], 80.00th=[13435], 90.00th=[46924], 95.00th=[50070], 00:31:10.196 | 99.00th=[52691], 99.50th=[53740], 99.90th=[87557], 99.95th=[91751], 00:31:10.196 | 99.99th=[91751] 00:31:10.196 bw ( KiB/s): min=22016, max=31807, per=33.23%, avg=26323.10, stdev=3530.87, samples=10 00:31:10.196 iops : min= 172, max= 248, avg=205.60, stdev=27.50, samples=10 00:31:10.196 lat (msec) : 10=40.06%, 20=49.47%, 50=5.72%, 100=4.75% 00:31:10.196 cpu : usr=96.35%, sys=3.39%, ctx=29, majf=0, minf=111 00:31:10.196 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.196 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.196 00:31:10.196 Run status group 0 (all jobs): 00:31:10.196 READ: bw=77.4MiB/s (81.1MB/s), 25.6MiB/s-26.1MiB/s (26.8MB/s-27.4MB/s), io=390MiB (409MB), run=5010-5044msec 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
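One aside before the three-subsystem NULL_DIF=2 setup continues below: the 5-second, three-thread runs above were driven by a job file fio read from /dev/fd/61, generated by gen_fio_conf. The trace only echoes its headline (rw=randread, 128KiB blocks, iodepth 3, three threads), so the following reconstruction is illustrative rather than the literal generated file — the section layout and the Nvme0n1 filename are inferred from the fio banner and the attached controller name:

```bash
# Hypothetical equivalent of the generated job file for the first
# fio_dif_rand_params case (bs=128k, numjobs=3, iodepth=3, runtime=5).
cat <<'FIO' > /tmp/dif_rand_params.fio
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3

[filename0]
filename=Nvme0n1
numjobs=3
FIO
```

The [filename0] job name is what shows up as "filename0:" in the fio output above, and numjobs=3 accounts for the "Starting 3 threads" banner.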
00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.196 bdev_null0 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.196 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 [2024-06-10 12:35:14.962273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 bdev_null1 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 bdev_null2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.197 { 00:31:10.197 "params": { 00:31:10.197 "name": "Nvme$subsystem", 00:31:10.197 "trtype": "$TEST_TRANSPORT", 00:31:10.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.197 "adrfam": "ipv4", 00:31:10.197 "trsvcid": "$NVMF_PORT", 00:31:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.197 "hdgst": ${hdgst:-false}, 00:31:10.197 "ddgst": ${ddgst:-false} 00:31:10.197 }, 00:31:10.197 "method": "bdev_nvme_attach_controller" 00:31:10.197 } 00:31:10.197 EOF 00:31:10.197 )") 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.197 { 00:31:10.197 "params": { 00:31:10.197 "name": "Nvme$subsystem", 00:31:10.197 "trtype": "$TEST_TRANSPORT", 00:31:10.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.197 "adrfam": "ipv4", 00:31:10.197 "trsvcid": "$NVMF_PORT", 00:31:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.197 "hdgst": ${hdgst:-false}, 00:31:10.197 "ddgst": ${ddgst:-false} 00:31:10.197 }, 00:31:10.197 "method": "bdev_nvme_attach_controller" 00:31:10.197 } 00:31:10.197 EOF 00:31:10.197 )") 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.197 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.197 { 00:31:10.197 "params": { 00:31:10.197 "name": "Nvme$subsystem", 00:31:10.197 "trtype": "$TEST_TRANSPORT", 00:31:10.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.197 "adrfam": "ipv4", 00:31:10.197 "trsvcid": "$NVMF_PORT", 00:31:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.198 "hdgst": ${hdgst:-false}, 00:31:10.198 "ddgst": ${ddgst:-false} 00:31:10.198 }, 00:31:10.198 "method": "bdev_nvme_attach_controller" 00:31:10.198 } 00:31:10.198 EOF 00:31:10.198 )") 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:10.198 "params": { 00:31:10.198 "name": "Nvme0", 00:31:10.198 "trtype": "tcp", 00:31:10.198 "traddr": "10.0.0.2", 00:31:10.198 "adrfam": "ipv4", 00:31:10.198 "trsvcid": "4420", 00:31:10.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.198 "hdgst": false, 00:31:10.198 "ddgst": false 00:31:10.198 }, 00:31:10.198 "method": "bdev_nvme_attach_controller" 00:31:10.198 },{ 00:31:10.198 "params": { 00:31:10.198 "name": "Nvme1", 00:31:10.198 "trtype": "tcp", 00:31:10.198 "traddr": "10.0.0.2", 00:31:10.198 "adrfam": "ipv4", 00:31:10.198 "trsvcid": "4420", 00:31:10.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.198 "hdgst": false, 00:31:10.198 "ddgst": false 00:31:10.198 }, 00:31:10.198 "method": "bdev_nvme_attach_controller" 00:31:10.198 },{ 00:31:10.198 "params": { 00:31:10.198 "name": "Nvme2", 00:31:10.198 "trtype": "tcp", 00:31:10.198 "traddr": "10.0.0.2", 00:31:10.198 "adrfam": "ipv4", 00:31:10.198 "trsvcid": "4420", 00:31:10.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:10.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:10.198 "hdgst": false, 00:31:10.198 "ddgst": false 00:31:10.198 }, 00:31:10.198 "method": "bdev_nvme_attach_controller" 00:31:10.198 }' 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.198 12:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.198 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.198 ... 00:31:10.198 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.198 ... 00:31:10.198 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.198 ... 00:31:10.198 fio-3.35 00:31:10.198 Starting 24 threads 00:31:10.198 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.492 00:31:22.492 filename0: (groupid=0, jobs=1): err= 0: pid=876894: Mon Jun 10 12:35:26 2024 00:31:22.492 read: IOPS=503, BW=2016KiB/s (2064kB/s)(19.7MiB/10001msec) 00:31:22.492 slat (nsec): min=5825, max=82906, avg=18333.68, stdev=11779.49 00:31:22.492 clat (usec): min=1519, max=36935, avg=31585.40, stdev=4962.99 00:31:22.492 lat (usec): min=1537, max=36943, avg=31603.74, stdev=4962.95 00:31:22.492 clat percentiles (usec): 00:31:22.492 | 1.00th=[ 2900], 5.00th=[30802], 10.00th=[31851], 20.00th=[31851], 00:31:22.492 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.492 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:22.492 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:31:22.492 | 99.99th=[36963] 00:31:22.492 bw ( KiB/s): min= 1916, max= 2944, per=4.25%, avg=2019.68, stdev=232.38, samples=19 00:31:22.492 iops : min= 479, max= 736, avg=504.84, stdev=58.09, samples=19 00:31:22.492 lat (msec) : 2=0.60%, 4=1.43%, 10=0.52%, 20=0.36%, 50=97.10% 00:31:22.492 cpu : usr=98.15%, sys=1.11%, ctx=636, majf=0, minf=0 00:31:22.492 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:22.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.492 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.492 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.492 filename0: (groupid=0, jobs=1): err= 0: pid=876895: Mon Jun 10 12:35:26 2024 00:31:22.492 read: IOPS=521, BW=2088KiB/s (2138kB/s)(20.4MiB/10011msec) 00:31:22.492 slat (nsec): min=5789, max=55029, avg=8358.12, stdev=4327.67 00:31:22.492 clat (usec): min=10200, max=51470, avg=30584.65, stdev=4495.19 00:31:22.492 lat (usec): min=10211, max=51481, avg=30593.01, stdev=4495.53 00:31:22.492 clat percentiles (usec): 00:31:22.492 | 1.00th=[14222], 5.00th=[19530], 10.00th=[22414], 20.00th=[31327], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[33817], 00:31:22.493 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:31:22.493 | 99.99th=[51643] 00:31:22.493 bw ( KiB/s): min= 1916, max= 2864, per=4.40%, avg=2091.00, stdev=288.99, samples=19 00:31:22.493 iops : min= 479, max= 716, avg=522.68, stdev=72.19, samples=19 00:31:22.493 lat (msec) : 
20=5.76%, 50=94.20%, 100=0.04% 00:31:22.493 cpu : usr=99.16%, sys=0.57%, ctx=11, majf=0, minf=9 00:31:22.493 IO depths : 1=5.0%, 2=10.1%, 4=21.4%, 8=55.9%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=5225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876897: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10006msec) 00:31:22.493 slat (nsec): min=5799, max=84513, avg=24019.47, stdev=13451.01 00:31:22.493 clat (usec): min=8563, max=53825, avg=32278.04, stdev=2661.58 00:31:22.493 lat (usec): min=8570, max=53843, avg=32302.06, stdev=2662.27 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[19792], 5.00th=[31327], 10.00th=[31851], 20.00th=[31851], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:31:22.493 | 99.00th=[37487], 99.50th=[43254], 99.90th=[53740], 99.95th=[53740], 00:31:22.493 | 99.99th=[53740] 00:31:22.493 bw ( KiB/s): min= 1795, max= 2080, per=4.13%, avg=1961.21, stdev=72.17, samples=19 00:31:22.493 iops : min= 448, max= 520, avg=490.26, stdev=18.14, samples=19 00:31:22.493 lat (msec) : 10=0.28%, 20=0.77%, 50=98.62%, 100=0.32% 00:31:22.493 cpu : usr=99.16%, sys=0.54%, ctx=97, majf=0, minf=9 00:31:22.493 IO depths : 1=3.5%, 2=9.6%, 4=24.4%, 8=53.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876898: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10011msec) 00:31:22.493 slat (nsec): min=5825, max=66539, avg=12930.63, stdev=7928.80 00:31:22.493 clat (usec): min=20390, max=47362, avg=32495.16, stdev=1650.21 00:31:22.493 lat (usec): min=20396, max=47381, avg=32508.09, stdev=1650.13 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[24511], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:22.493 | 99.00th=[35914], 99.50th=[36963], 99.90th=[47449], 99.95th=[47449], 00:31:22.493 | 99.99th=[47449] 00:31:22.493 bw ( KiB/s): min= 1792, max= 2052, per=4.13%, avg=1959.95, stdev=74.75, samples=19 00:31:22.493 iops : min= 448, max= 513, avg=489.95, stdev=18.64, samples=19 00:31:22.493 lat (msec) : 50=100.00% 00:31:22.493 cpu : usr=99.27%, sys=0.48%, ctx=16, majf=0, minf=9 00:31:22.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876899: Mon Jun 10 12:35:26 2024 
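[Annotation] The summaries above come from a job file that target/dif.sh's gen_fio_conf assembles on the fly and hands to fio on /dev/fd/61. As a rough reconstruction only, the sketch below rebuilds an equivalent standalone job file from the banner lines (rw=randread, bs=4096B, iodepth=16, ioengine=spdk_bdev, three filename sections); numjobs=8 is inferred from the 24 started threads across three files, and the Nvme*n1 bdev names are assumptions based on the attached controller names, not values printed in this log:

    # Hypothetical standalone equivalent of the generated job file.
    cat > /tmp/dif_rand.fio <<'FIO'
    [global]
    ioengine=spdk_bdev      ; fio bdev plugin, loaded via LD_PRELOAD in the trace above
    thread=1                ; the SPDK engine requires threaded jobs
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8               ; assumed: 24 started threads / 3 files
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1        ; assumed bdev name from controller Nvme0

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
    FIO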
00:31:22.493 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:31:22.493 slat (nsec): min=6078, max=72978, avg=24356.22, stdev=12126.01 00:31:22.493 clat (usec): min=7683, max=53549, avg=32371.56, stdev=2358.56 00:31:22.493 lat (usec): min=7689, max=53568, avg=32395.92, stdev=2358.95 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[29230], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:22.493 | 99.00th=[36963], 99.50th=[37487], 99.90th=[53740], 99.95th=[53740], 00:31:22.493 | 99.99th=[53740] 00:31:22.493 bw ( KiB/s): min= 1795, max= 2052, per=4.11%, avg=1953.63, stdev=72.08, samples=19 00:31:22.493 iops : min= 448, max= 513, avg=488.37, stdev=18.11, samples=19 00:31:22.493 lat (msec) : 10=0.33%, 20=0.33%, 50=99.02%, 100=0.33% 00:31:22.493 cpu : usr=99.07%, sys=0.67%, ctx=13, majf=0, minf=9 00:31:22.493 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876900: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10013msec) 00:31:22.493 slat (nsec): min=5532, max=53004, avg=10041.10, stdev=5882.84 00:31:22.493 clat (usec): min=18752, max=49149, avg=32535.97, stdev=1689.62 00:31:22.493 lat (usec): min=18758, max=49166, avg=32546.01, stdev=1689.36 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[25035], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:22.493 | 99.00th=[36439], 99.50th=[36963], 99.90th=[49021], 99.95th=[49021], 00:31:22.493 | 99.99th=[49021] 00:31:22.493 bw ( KiB/s): min= 1904, max= 2048, per=4.13%, avg=1959.47, stdev=59.31, samples=19 00:31:22.493 iops : min= 476, max= 512, avg=489.79, stdev=14.72, samples=19 00:31:22.493 lat (msec) : 20=0.08%, 50=99.92% 00:31:22.493 cpu : usr=99.19%, sys=0.55%, ctx=13, majf=0, minf=9 00:31:22.493 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876901: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.3MiB/10020msec) 00:31:22.493 slat (nsec): min=5792, max=76414, avg=16608.13, stdev=11336.43 00:31:22.493 clat (usec): min=15921, max=73160, avg=32348.69, stdev=3394.26 00:31:22.493 lat (usec): min=15952, max=73193, avg=32365.30, stdev=3394.67 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[21627], 5.00th=[26870], 10.00th=[31327], 20.00th=[31851], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[36439], 
00:31:22.493 | 99.00th=[44827], 99.50th=[47449], 99.90th=[57934], 99.95th=[57934], 00:31:22.493 | 99.99th=[72877] 00:31:22.493 bw ( KiB/s): min= 1824, max= 2048, per=4.13%, avg=1960.11, stdev=68.82, samples=19 00:31:22.493 iops : min= 456, max= 512, avg=489.95, stdev=17.11, samples=19 00:31:22.493 lat (msec) : 20=0.16%, 50=99.43%, 100=0.41% 00:31:22.493 cpu : usr=98.90%, sys=0.82%, ctx=10, majf=0, minf=9 00:31:22.493 IO depths : 1=4.6%, 2=9.4%, 4=20.2%, 8=57.5%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=92.8%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename0: (groupid=0, jobs=1): err= 0: pid=876902: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10003msec) 00:31:22.493 slat (nsec): min=5833, max=90407, avg=27486.19, stdev=15174.12 00:31:22.493 clat (usec): min=15405, max=57698, avg=32442.57, stdev=2009.03 00:31:22.493 lat (usec): min=15424, max=57714, avg=32470.06, stdev=2008.31 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.493 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:22.493 | 99.00th=[36963], 99.50th=[37487], 99.90th=[57410], 99.95th=[57934], 00:31:22.493 | 99.99th=[57934] 00:31:22.493 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1952.32, stdev=71.65, samples=19 00:31:22.493 iops : min= 448, max= 512, avg=488.00, stdev=17.81, samples=19 00:31:22.493 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:31:22.493 cpu : usr=98.87%, sys=0.72%, ctx=127, majf=0, minf=9 00:31:22.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename1: (groupid=0, jobs=1): err= 0: pid=876903: Mon Jun 10 12:35:26 2024 00:31:22.493 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10014msec) 00:31:22.493 slat (nsec): min=5798, max=81689, avg=14558.55, stdev=10085.81 00:31:22.493 clat (usec): min=13099, max=52038, avg=32410.99, stdev=3232.85 00:31:22.493 lat (usec): min=13105, max=52062, avg=32425.55, stdev=3233.02 00:31:22.493 clat percentiles (usec): 00:31:22.493 | 1.00th=[18744], 5.00th=[27657], 10.00th=[31851], 20.00th=[31851], 00:31:22.493 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.493 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[35914], 00:31:22.493 | 99.00th=[44827], 99.50th=[48497], 99.90th=[52167], 99.95th=[52167], 00:31:22.493 | 99.99th=[52167] 00:31:22.493 bw ( KiB/s): min= 1792, max= 2144, per=4.12%, avg=1957.21, stdev=79.42, samples=19 00:31:22.493 iops : min= 448, max= 536, avg=489.26, stdev=19.81, samples=19 00:31:22.493 lat (msec) : 20=1.14%, 50=98.54%, 100=0.33% 00:31:22.493 cpu : usr=99.09%, sys=0.62%, ctx=10, majf=0, minf=9 00:31:22.493 IO depths : 1=5.0%, 2=10.0%, 4=20.8%, 8=56.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:22.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
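[Annotation] A quick way to sanity-check these summary lines: with the group's fixed 4 KiB block size, IOPS and bandwidth are two views of the same number, so a job reporting BW=1962KiB/s should run at 1962/4 = 490 IOPS, which matches the IOPS=490 figure above. The one-liner below (plain awk, nothing SPDK-specific) does the division:

    # IOPS ~= bandwidth_in_KiB_per_s / block_size_in_KiB
    awk 'BEGIN { printf "%.0f IOPS\n", 1962 / 4 }'    # prints: 490 IOPS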
00:31:22.493 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.493 issued rwts: total=4922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.493 filename1: (groupid=0, jobs=1): err= 0: pid=876904: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:31:22.494 slat (nsec): min=5807, max=85926, avg=12441.16, stdev=7938.74 00:31:22.494 clat (usec): min=18483, max=53777, avg=32594.30, stdev=2332.45 00:31:22.494 lat (usec): min=18495, max=53798, avg=32606.74, stdev=2332.09 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:31:22.494 | 99.00th=[43254], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:31:22.494 | 99.99th=[53740] 00:31:22.494 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1952.79, stdev=70.50, samples=19 00:31:22.494 iops : min= 448, max= 512, avg=488.16, stdev=17.57, samples=19 00:31:22.494 lat (msec) : 20=0.41%, 50=99.26%, 100=0.33% 00:31:22.494 cpu : usr=99.13%, sys=0.57%, ctx=17, majf=0, minf=9 00:31:22.494 IO depths : 1=2.7%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876905: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:31:22.494 slat (nsec): min=5705, max=97497, avg=27220.30, stdev=16784.79 00:31:22.494 clat (usec): min=22709, max=48260, avg=32436.01, stdev=1479.88 00:31:22.494 lat (usec): min=22726, max=48276, avg=32463.23, stdev=1479.35 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:22.494 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34866], 00:31:22.494 | 99.00th=[37487], 99.50th=[41157], 99.90th=[47973], 99.95th=[48497], 00:31:22.494 | 99.99th=[48497] 00:31:22.494 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1953.00, stdev=72.00, samples=19 00:31:22.494 iops : min= 448, max= 512, avg=488.21, stdev=18.09, samples=19 00:31:22.494 lat (msec) : 50=100.00% 00:31:22.494 cpu : usr=98.55%, sys=0.79%, ctx=111, majf=0, minf=9 00:31:22.494 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876906: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=491, BW=1967KiB/s (2015kB/s)(19.2MiB/10019msec) 00:31:22.494 slat (nsec): min=5817, max=68879, avg=16504.91, stdev=11831.63 00:31:22.494 clat (usec): min=10309, max=36924, avg=32386.34, stdev=1822.97 00:31:22.494 lat (usec): min=10319, max=36943, 
avg=32402.84, stdev=1821.99 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[24511], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:22.494 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:31:22.494 | 99.99th=[36963] 00:31:22.494 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1966.05, stdev=63.77, samples=19 00:31:22.494 iops : min= 479, max= 512, avg=491.47, stdev=15.89, samples=19 00:31:22.494 lat (msec) : 20=0.32%, 50=99.68% 00:31:22.494 cpu : usr=99.08%, sys=0.62%, ctx=59, majf=0, minf=9 00:31:22.494 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876907: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10021msec) 00:31:22.494 slat (nsec): min=5797, max=85715, avg=15041.10, stdev=12468.54 00:31:22.494 clat (usec): min=15941, max=52530, avg=32325.58, stdev=3632.49 00:31:22.494 lat (usec): min=15948, max=52537, avg=32340.62, stdev=3633.38 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[19792], 5.00th=[25560], 10.00th=[29492], 20.00th=[31851], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[36963], 00:31:22.494 | 99.00th=[45351], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:31:22.494 | 99.99th=[52691] 00:31:22.494 bw ( KiB/s): min= 1916, max= 2128, per=4.15%, avg=1969.35, stdev=68.38, samples=20 00:31:22.494 iops : min= 479, max= 532, avg=492.30, stdev=17.05, samples=20 00:31:22.494 lat (msec) : 20=1.01%, 50=98.54%, 100=0.45% 00:31:22.494 cpu : usr=98.96%, sys=0.72%, ctx=46, majf=0, minf=11 00:31:22.494 IO depths : 1=3.4%, 2=7.6%, 4=18.7%, 8=60.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=92.6%, 8=2.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876909: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=490, BW=1963KiB/s (2011kB/s)(19.2MiB/10007msec) 00:31:22.494 slat (nsec): min=5856, max=97971, avg=26833.38, stdev=15319.48 00:31:22.494 clat (usec): min=8275, max=54088, avg=32338.51, stdev=2352.18 00:31:22.494 lat (usec): min=8296, max=54106, avg=32365.34, stdev=2352.25 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[29230], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:22.494 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:22.494 | 99.00th=[36963], 99.50th=[37487], 99.90th=[54264], 99.95th=[54264], 00:31:22.494 | 99.99th=[54264] 00:31:22.494 bw ( KiB/s): min= 1795, max= 2052, per=4.11%, avg=1953.63, stdev=72.08, samples=19 00:31:22.494 iops : min= 448, max= 513, avg=488.37, stdev=18.11, 
samples=19 00:31:22.494 lat (msec) : 10=0.33%, 20=0.33%, 50=99.02%, 100=0.33% 00:31:22.494 cpu : usr=99.20%, sys=0.49%, ctx=28, majf=0, minf=9 00:31:22.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876910: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10012msec) 00:31:22.494 slat (nsec): min=5806, max=73075, avg=17433.86, stdev=11436.31 00:31:22.494 clat (usec): min=1472, max=55057, avg=31835.95, stdev=4558.30 00:31:22.494 lat (usec): min=1485, max=55073, avg=31853.39, stdev=4558.46 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[ 3163], 5.00th=[29492], 10.00th=[31851], 20.00th=[31851], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:31:22.494 | 99.00th=[36963], 99.50th=[46924], 99.90th=[49021], 99.95th=[53740], 00:31:22.494 | 99.99th=[55313] 00:31:22.494 bw ( KiB/s): min= 1916, max= 2560, per=4.21%, avg=1999.95, stdev=148.71, samples=19 00:31:22.494 iops : min= 479, max= 640, avg=499.95, stdev=37.17, samples=19 00:31:22.494 lat (msec) : 2=0.04%, 4=1.24%, 10=0.32%, 20=1.32%, 50=97.00% 00:31:22.494 lat (msec) : 100=0.08% 00:31:22.494 cpu : usr=98.23%, sys=1.16%, ctx=100, majf=0, minf=11 00:31:22.494 IO depths : 1=5.1%, 2=11.3%, 4=24.7%, 8=51.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.494 filename1: (groupid=0, jobs=1): err= 0: pid=876911: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec) 00:31:22.494 slat (nsec): min=5812, max=73379, avg=17982.78, stdev=11366.07 00:31:22.494 clat (usec): min=11326, max=58320, avg=32527.04, stdev=2174.28 00:31:22.494 lat (usec): min=11332, max=58337, avg=32545.02, stdev=2173.58 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[30540], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:22.494 | 99.00th=[36439], 99.50th=[36963], 99.90th=[58459], 99.95th=[58459], 00:31:22.494 | 99.99th=[58459] 00:31:22.494 bw ( KiB/s): min= 1792, max= 2052, per=4.11%, avg=1953.58, stdev=83.36, samples=19 00:31:22.494 iops : min= 448, max= 513, avg=488.32, stdev=20.88, samples=19 00:31:22.494 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:31:22.494 cpu : usr=99.08%, sys=0.64%, ctx=64, majf=0, minf=9 00:31:22.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:22.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.494 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.494 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:22.494 filename2: (groupid=0, jobs=1): err= 0: pid=876912: Mon Jun 10 12:35:26 2024 00:31:22.494 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:31:22.494 slat (nsec): min=5809, max=83452, avg=19993.08, stdev=14634.39 00:31:22.494 clat (usec): min=23482, max=47700, avg=32543.58, stdev=1387.26 00:31:22.494 lat (usec): min=23492, max=47718, avg=32563.57, stdev=1385.66 00:31:22.494 clat percentiles (usec): 00:31:22.494 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.494 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.494 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:31:22.494 | 99.00th=[37487], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:31:22.494 | 99.99th=[47449] 00:31:22.495 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1953.00, stdev=72.00, samples=19 00:31:22.495 iops : min= 448, max= 512, avg=488.21, stdev=18.09, samples=19 00:31:22.495 lat (msec) : 50=100.00% 00:31:22.495 cpu : usr=98.23%, sys=1.09%, ctx=79, majf=0, minf=9 00:31:22.495 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876913: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=492, BW=1969KiB/s (2017kB/s)(19.2MiB/10009msec) 00:31:22.495 slat (nsec): min=5815, max=63107, avg=14261.45, stdev=9761.30 00:31:22.495 clat (usec): min=8766, max=37001, avg=32379.16, stdev=1874.97 00:31:22.495 lat (usec): min=8776, max=37008, avg=32393.42, stdev=1874.03 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[25297], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:31:22.495 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:31:22.495 | 99.99th=[36963] 00:31:22.495 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1966.05, stdev=63.77, samples=19 00:31:22.495 iops : min= 479, max= 512, avg=491.47, stdev=15.89, samples=19 00:31:22.495 lat (msec) : 10=0.04%, 20=0.28%, 50=99.68% 00:31:22.495 cpu : usr=98.46%, sys=0.88%, ctx=136, majf=0, minf=9 00:31:22.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876914: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10013msec) 00:31:22.495 slat (nsec): min=5782, max=69442, avg=12689.17, stdev=9895.82 00:31:22.495 clat (usec): min=14226, max=65159, avg=31921.31, stdev=4311.45 00:31:22.495 lat (usec): min=14232, max=65182, avg=31934.00, stdev=4312.01 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[16450], 5.00th=[22152], 10.00th=[27132], 20.00th=[31851], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 
60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[36963], 00:31:22.495 | 99.00th=[45876], 99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:31:22.495 | 99.99th=[65274] 00:31:22.495 bw ( KiB/s): min= 1792, max= 2251, per=4.19%, avg=1989.00, stdev=117.70, samples=19 00:31:22.495 iops : min= 448, max= 562, avg=497.21, stdev=29.33, samples=19 00:31:22.495 lat (msec) : 20=1.98%, 50=97.82%, 100=0.20% 00:31:22.495 cpu : usr=98.40%, sys=0.99%, ctx=43, majf=0, minf=9 00:31:22.495 IO depths : 1=1.1%, 2=6.0%, 4=20.0%, 8=61.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=5001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876915: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10011msec) 00:31:22.495 slat (nsec): min=5795, max=61002, avg=8542.96, stdev=4485.90 00:31:22.495 clat (usec): min=8361, max=36225, avg=29896.73, stdev=4999.90 00:31:22.495 lat (usec): min=8376, max=36232, avg=29905.27, stdev=5000.38 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[16057], 5.00th=[17171], 10.00th=[21103], 20.00th=[26346], 00:31:22.495 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:22.495 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33817], 00:31:22.495 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:31:22.495 | 99.99th=[36439] 00:31:22.495 bw ( KiB/s): min= 1916, max= 2560, per=4.49%, avg=2134.32, stdev=200.04, samples=19 00:31:22.495 iops : min= 479, max= 640, avg=533.42, stdev=49.93, samples=19 00:31:22.495 lat (msec) : 10=0.04%, 20=8.05%, 50=91.92% 00:31:22.495 cpu : usr=98.82%, sys=0.81%, ctx=42, majf=0, minf=9 00:31:22.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876916: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10002msec) 00:31:22.495 slat (nsec): min=5816, max=74054, avg=10074.77, stdev=6276.64 00:31:22.495 clat (usec): min=23958, max=48318, avg=32597.40, stdev=1441.83 00:31:22.495 lat (usec): min=23964, max=48335, avg=32607.47, stdev=1441.24 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:31:22.495 | 99.00th=[36963], 99.50th=[38011], 99.90th=[48497], 99.95th=[48497], 00:31:22.495 | 99.99th=[48497] 00:31:22.495 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1953.00, stdev=72.00, samples=19 00:31:22.495 iops : min= 448, max= 512, avg=488.21, stdev=18.09, samples=19 00:31:22.495 lat (msec) : 50=100.00% 00:31:22.495 cpu : usr=99.15%, sys=0.54%, ctx=56, majf=0, minf=9 00:31:22.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
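[Annotation] The per= column is each job's share of the whole group's aggregate read bandwidth, which the run-status line a few chunks below reports as READ: bw=46.4MiB/s. A job averaging 1953 KiB/s therefore shows per=4.11%; the check below reproduces it, assuming only the usual 1 MiB = 1024 KiB conversion:

    # per = job average bandwidth / group aggregate bandwidth
    awk 'BEGIN { printf "%.2f%%\n", 1953 / (46.4 * 1024) * 100 }'   # prints: 4.11%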
00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876918: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10001msec) 00:31:22.495 slat (nsec): min=5824, max=83114, avg=15756.56, stdev=12651.75 00:31:22.495 clat (usec): min=14056, max=57427, avg=32562.44, stdev=2576.32 00:31:22.495 lat (usec): min=14063, max=57443, avg=32578.19, stdev=2575.30 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[23725], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:31:22.495 | 99.00th=[41681], 99.50th=[43254], 99.90th=[57410], 99.95th=[57410], 00:31:22.495 | 99.99th=[57410] 00:31:22.495 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1952.32, stdev=71.65, samples=19 00:31:22.495 iops : min= 448, max= 512, avg=488.00, stdev=17.81, samples=19 00:31:22.495 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:31:22.495 cpu : usr=99.12%, sys=0.61%, ctx=15, majf=0, minf=9 00:31:22.495 IO depths : 1=5.5%, 2=11.5%, 4=24.3%, 8=51.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876919: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10008msec) 00:31:22.495 slat (nsec): min=5637, max=76798, avg=22533.14, stdev=12043.59 00:31:22.495 clat (usec): min=7625, max=55004, avg=32410.24, stdev=2374.04 00:31:22.495 lat (usec): min=7630, max=55022, avg=32432.78, stdev=2374.03 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:31:22.495 | 99.00th=[36963], 99.50th=[37487], 99.90th=[54789], 99.95th=[54789], 00:31:22.495 | 99.99th=[54789] 00:31:22.495 bw ( KiB/s): min= 1792, max= 2052, per=4.11%, avg=1953.47, stdev=72.45, samples=19 00:31:22.495 iops : min= 448, max= 513, avg=488.37, stdev=18.11, samples=19 00:31:22.495 lat (msec) : 10=0.33%, 20=0.33%, 50=99.02%, 100=0.33% 00:31:22.495 cpu : usr=98.84%, sys=0.81%, ctx=98, majf=0, minf=9 00:31:22.495 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 filename2: (groupid=0, jobs=1): err= 0: pid=876920: Mon Jun 10 12:35:26 2024 00:31:22.495 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:31:22.495 slat (nsec): min=5808, max=81645, avg=23878.00, stdev=11258.80 00:31:22.495 clat 
(usec): min=7722, max=53744, avg=32374.58, stdev=2330.01 00:31:22.495 lat (usec): min=7728, max=53761, avg=32398.45, stdev=2330.17 00:31:22.495 clat percentiles (usec): 00:31:22.495 | 1.00th=[29492], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:22.495 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:22.495 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:31:22.495 | 99.00th=[36963], 99.50th=[37487], 99.90th=[53740], 99.95th=[53740], 00:31:22.495 | 99.99th=[53740] 00:31:22.495 bw ( KiB/s): min= 1795, max= 2052, per=4.11%, avg=1953.63, stdev=72.08, samples=19 00:31:22.495 iops : min= 448, max= 513, avg=488.37, stdev=18.11, samples=19 00:31:22.495 lat (msec) : 10=0.33%, 20=0.33%, 50=99.02%, 100=0.33% 00:31:22.495 cpu : usr=99.20%, sys=0.51%, ctx=40, majf=0, minf=9 00:31:22.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.495 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.495 00:31:22.495 Run status group 0 (all jobs): 00:31:22.495 READ: bw=46.4MiB/s (48.6MB/s), 1956KiB/s-2135KiB/s (2003kB/s-2186kB/s), io=465MiB (487MB), run=10001-10021msec 00:31:22.495 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 bdev_null0 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:22.496 12:35:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 [2024-06-10 12:35:26.755309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 bdev_null1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
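[Annotation] The xtrace above is dif.sh's destroy_subsystems/create_subsystems pair: each subsystem is torn down (nvmf_delete_subsystem, then bdev_null_delete) and rebuilt around a null bdev carrying 16-byte metadata with DIF type 1. Outside the harness the same target-side sequence can be issued directly with scripts/rpc.py; the sketch below mirrors the traced rpc_cmd calls for subsystem 0, with arguments copied from the log (the rpc.py path assumes a stock SPDK checkout):

    # Create: null bdev (64 MiB, 512 B blocks, 16 B metadata, DIF type 1),
    # then an NVMe-oF subsystem exposing it on TCP 10.0.0.2:4420.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Destroy, in the order destroy_subsystems uses: subsystem first, bdev second.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0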
00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.496 { 00:31:22.496 "params": { 00:31:22.496 "name": "Nvme$subsystem", 00:31:22.496 "trtype": "$TEST_TRANSPORT", 00:31:22.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.496 "adrfam": "ipv4", 00:31:22.496 "trsvcid": "$NVMF_PORT", 00:31:22.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.496 "hdgst": ${hdgst:-false}, 00:31:22.496 "ddgst": ${ddgst:-false} 00:31:22.496 }, 00:31:22.496 "method": "bdev_nvme_attach_controller" 00:31:22.496 } 00:31:22.496 EOF 00:31:22.496 )") 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.496 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.496 { 00:31:22.496 "params": { 00:31:22.496 "name": "Nvme$subsystem", 00:31:22.496 "trtype": "$TEST_TRANSPORT", 00:31:22.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.497 "adrfam": "ipv4", 00:31:22.497 "trsvcid": "$NVMF_PORT", 00:31:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.497 "hdgst": ${hdgst:-false}, 00:31:22.497 
"ddgst": ${ddgst:-false} 00:31:22.497 }, 00:31:22.497 "method": "bdev_nvme_attach_controller" 00:31:22.497 } 00:31:22.497 EOF 00:31:22.497 )") 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:22.497 "params": { 00:31:22.497 "name": "Nvme0", 00:31:22.497 "trtype": "tcp", 00:31:22.497 "traddr": "10.0.0.2", 00:31:22.497 "adrfam": "ipv4", 00:31:22.497 "trsvcid": "4420", 00:31:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.497 "hdgst": false, 00:31:22.497 "ddgst": false 00:31:22.497 }, 00:31:22.497 "method": "bdev_nvme_attach_controller" 00:31:22.497 },{ 00:31:22.497 "params": { 00:31:22.497 "name": "Nvme1", 00:31:22.497 "trtype": "tcp", 00:31:22.497 "traddr": "10.0.0.2", 00:31:22.497 "adrfam": "ipv4", 00:31:22.497 "trsvcid": "4420", 00:31:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.497 "hdgst": false, 00:31:22.497 "ddgst": false 00:31:22.497 }, 00:31:22.497 "method": "bdev_nvme_attach_controller" 00:31:22.497 }' 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:22.497 12:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.497 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:22.497 ... 00:31:22.497 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:22.497 ... 
00:31:22.497 fio-3.35 00:31:22.497 Starting 4 threads 00:31:22.497 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.776 00:31:27.776 filename0: (groupid=0, jobs=1): err= 0: pid=879224: Mon Jun 10 12:35:32 2024 00:31:27.776 read: IOPS=2196, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5002msec) 00:31:27.776 slat (nsec): min=5629, max=42812, avg=7578.49, stdev=2602.61 00:31:27.776 clat (usec): min=1285, max=7576, avg=3621.44, stdev=728.28 00:31:27.776 lat (usec): min=1294, max=7582, avg=3629.01, stdev=728.22 00:31:27.776 clat percentiles (usec): 00:31:27.776 | 1.00th=[ 2057], 5.00th=[ 2573], 10.00th=[ 2835], 20.00th=[ 3163], 00:31:27.776 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3621], 00:31:27.776 | 70.00th=[ 3752], 80.00th=[ 3949], 90.00th=[ 4817], 95.00th=[ 5211], 00:31:27.776 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6259], 00:31:27.776 | 99.99th=[ 7570] 00:31:27.776 bw ( KiB/s): min=17040, max=18768, per=26.04%, avg=17630.22, stdev=615.61, samples=9 00:31:27.777 iops : min= 2130, max= 2346, avg=2203.78, stdev=76.95, samples=9 00:31:27.777 lat (msec) : 2=0.56%, 4=80.12%, 10=19.31% 00:31:27.777 cpu : usr=97.14%, sys=2.58%, ctx=6, majf=0, minf=0 00:31:27.777 IO depths : 1=0.2%, 2=1.1%, 4=70.8%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 issued rwts: total=10987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.777 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:27.777 filename0: (groupid=0, jobs=1): err= 0: pid=879225: Mon Jun 10 12:35:32 2024 00:31:27.777 read: IOPS=2073, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5001msec) 00:31:27.777 slat (nsec): min=5619, max=58260, avg=8582.36, stdev=2892.68 00:31:27.777 clat (usec): min=1434, max=48378, avg=3835.65, stdev=1435.44 00:31:27.777 lat (usec): min=1442, max=48413, avg=3844.23, stdev=1435.39 00:31:27.777 clat percentiles (usec): 00:31:27.777 | 1.00th=[ 2376], 5.00th=[ 2900], 10.00th=[ 3163], 20.00th=[ 3359], 00:31:27.777 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3720], 00:31:27.777 | 70.00th=[ 3818], 80.00th=[ 4146], 90.00th=[ 5145], 95.00th=[ 5342], 00:31:27.777 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6980], 99.95th=[48497], 00:31:27.777 | 99.99th=[48497] 00:31:27.777 bw ( KiB/s): min=15198, max=18464, per=24.47%, avg=16568.67, stdev=862.29, samples=9 00:31:27.777 iops : min= 1899, max= 2308, avg=2071.00, stdev=107.94, samples=9 00:31:27.777 lat (msec) : 2=0.16%, 4=74.55%, 10=25.21%, 50=0.08% 00:31:27.777 cpu : usr=97.28%, sys=2.46%, ctx=10, majf=0, minf=11 00:31:27.777 IO depths : 1=0.1%, 2=0.3%, 4=71.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 issued rwts: total=10368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.777 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:27.777 filename1: (groupid=0, jobs=1): err= 0: pid=879226: Mon Jun 10 12:35:32 2024 00:31:27.777 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5001msec) 00:31:27.777 slat (nsec): min=5622, max=39989, avg=7612.60, stdev=2670.34 00:31:27.777 clat (usec): min=1560, max=45850, avg=3859.45, stdev=1371.67 00:31:27.777 lat (usec): min=1566, max=45882, avg=3867.06, stdev=1371.80 00:31:27.777 clat percentiles (usec): 00:31:27.777 | 1.00th=[ 2376], 
5.00th=[ 2900], 10.00th=[ 3163], 20.00th=[ 3359], 00:31:27.777 | 30.00th=[ 3458], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3752], 00:31:27.777 | 70.00th=[ 3949], 80.00th=[ 4293], 90.00th=[ 5145], 95.00th=[ 5276], 00:31:27.777 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[45876], 00:31:27.777 | 99.99th=[45876] 00:31:27.777 bw ( KiB/s): min=15232, max=18288, per=24.33%, avg=16474.67, stdev=800.00, samples=9 00:31:27.777 iops : min= 1904, max= 2286, avg=2059.33, stdev=100.00, samples=9 00:31:27.777 lat (msec) : 2=0.17%, 4=71.22%, 10=28.52%, 50=0.08% 00:31:27.777 cpu : usr=96.90%, sys=2.84%, ctx=8, majf=0, minf=9 00:31:27.777 IO depths : 1=0.3%, 2=1.0%, 4=70.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 issued rwts: total=10311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.777 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:27.777 filename1: (groupid=0, jobs=1): err= 0: pid=879227: Mon Jun 10 12:35:32 2024 00:31:27.777 read: IOPS=2131, BW=16.7MiB/s (17.5MB/s)(83.3MiB/5002msec) 00:31:27.777 slat (nsec): min=5622, max=42352, avg=8236.17, stdev=2599.15 00:31:27.777 clat (usec): min=1286, max=7091, avg=3729.98, stdev=732.75 00:31:27.777 lat (usec): min=1295, max=7099, avg=3738.22, stdev=732.75 00:31:27.777 clat percentiles (usec): 00:31:27.777 | 1.00th=[ 2180], 5.00th=[ 2737], 10.00th=[ 3032], 20.00th=[ 3261], 00:31:27.777 | 30.00th=[ 3392], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3687], 00:31:27.777 | 70.00th=[ 3785], 80.00th=[ 4113], 90.00th=[ 5080], 95.00th=[ 5211], 00:31:27.777 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6390], 00:31:27.777 | 99.99th=[ 7111] 00:31:27.777 bw ( KiB/s): min=16640, max=18464, per=25.32%, avg=17143.33, stdev=549.90, samples=9 00:31:27.777 iops : min= 2080, max= 2308, avg=2142.89, stdev=68.72, samples=9 00:31:27.777 lat (msec) : 2=0.34%, 4=77.03%, 10=22.63% 00:31:27.777 cpu : usr=97.02%, sys=2.72%, ctx=8, majf=0, minf=0 00:31:27.777 IO depths : 1=0.2%, 2=0.6%, 4=71.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.777 issued rwts: total=10664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.777 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:27.777 00:31:27.777 Run status group 0 (all jobs): 00:31:27.777 READ: bw=66.1MiB/s (69.3MB/s), 16.1MiB/s-17.2MiB/s (16.9MB/s-18.0MB/s), io=331MiB (347MB), run=5001-5002msec 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.777 12:35:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.777 00:31:27.777 real 0m24.358s 00:31:27.777 user 5m20.395s 00:31:27.777 sys 0m3.816s 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:27.777 12:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 ************************************ 00:31:27.777 END TEST fio_dif_rand_params 00:31:27.777 ************************************ 00:31:27.777 12:35:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:27.777 12:35:33 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:27.777 12:35:33 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:27.777 12:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.777 ************************************ 00:31:27.777 START TEST fio_dif_digest 00:31:27.777 ************************************ 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:27.777 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:27.778 bdev_null0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:27.778 [2024-06-10 12:35:33.144278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:27.778 { 00:31:27.778 "params": { 00:31:27.778 "name": "Nvme$subsystem", 00:31:27.778 "trtype": "$TEST_TRANSPORT", 00:31:27.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.778 "adrfam": "ipv4", 00:31:27.778 "trsvcid": "$NVMF_PORT", 00:31:27.778 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.778 "hdgst": ${hdgst:-false}, 00:31:27.778 "ddgst": ${ddgst:-false} 00:31:27.778 }, 00:31:27.778 "method": "bdev_nvme_attach_controller" 00:31:27.778 } 00:31:27.778 EOF 00:31:27.778 )") 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:27.778 "params": { 00:31:27.778 "name": "Nvme0", 00:31:27.778 "trtype": "tcp", 00:31:27.778 "traddr": "10.0.0.2", 00:31:27.778 "adrfam": "ipv4", 00:31:27.778 "trsvcid": "4420", 00:31:27.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.778 "hdgst": true, 00:31:27.778 "ddgst": true 00:31:27.778 }, 00:31:27.778 "method": "bdev_nvme_attach_controller" 00:31:27.778 }' 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.778 12:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.038 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:28.038 ... 
00:31:28.038 fio-3.35 00:31:28.038 Starting 3 threads 00:31:28.038 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.261 00:31:40.261 filename0: (groupid=0, jobs=1): err= 0: pid=880638: Mon Jun 10 12:35:44 2024 00:31:40.261 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(279MiB/10047msec) 00:31:40.261 slat (nsec): min=6030, max=60211, avg=8254.07, stdev=2743.83 00:31:40.261 clat (usec): min=9410, max=58307, avg=13486.28, stdev=2680.84 00:31:40.261 lat (usec): min=9416, max=58313, avg=13494.54, stdev=2680.79 00:31:40.261 clat percentiles (usec): 00:31:40.261 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:31:40.261 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:31:40.261 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14615], 95.00th=[15139], 00:31:40.261 | 99.00th=[16450], 99.50th=[17171], 99.90th=[55313], 99.95th=[57410], 00:31:40.261 | 99.99th=[58459] 00:31:40.261 bw ( KiB/s): min=26368, max=29440, per=34.38%, avg=28518.40, stdev=836.37, samples=20 00:31:40.261 iops : min= 206, max= 230, avg=222.80, stdev= 6.53, samples=20 00:31:40.261 lat (msec) : 10=0.13%, 20=99.51%, 100=0.36% 00:31:40.261 cpu : usr=94.95%, sys=4.79%, ctx=28, majf=0, minf=108 00:31:40.261 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.261 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.261 filename0: (groupid=0, jobs=1): err= 0: pid=880640: Mon Jun 10 12:35:44 2024 00:31:40.261 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(263MiB/10007msec) 00:31:40.261 slat (nsec): min=6042, max=44417, avg=7406.30, stdev=1668.12 00:31:40.261 clat (usec): min=8519, max=23440, avg=14245.06, stdev=1372.79 00:31:40.261 lat (usec): min=8527, max=23484, avg=14252.47, stdev=1372.89 00:31:40.261 clat percentiles (usec): 00:31:40.261 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12780], 20.00th=[13173], 00:31:40.261 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:31:40.261 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15926], 95.00th=[16450], 00:31:40.261 | 99.00th=[17695], 99.50th=[17957], 99.90th=[21890], 99.95th=[22152], 00:31:40.261 | 99.99th=[23462] 00:31:40.261 bw ( KiB/s): min=26112, max=28160, per=32.45%, avg=26918.40, stdev=400.70, samples=20 00:31:40.261 iops : min= 204, max= 220, avg=210.30, stdev= 3.13, samples=20 00:31:40.261 lat (msec) : 10=0.66%, 20=99.10%, 50=0.24% 00:31:40.261 cpu : usr=95.36%, sys=4.39%, ctx=30, majf=0, minf=187 00:31:40.261 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.261 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.261 filename0: (groupid=0, jobs=1): err= 0: pid=880641: Mon Jun 10 12:35:44 2024 00:31:40.261 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10048msec) 00:31:40.261 slat (nsec): min=6045, max=33653, avg=7434.22, stdev=1744.35 00:31:40.261 clat (usec): min=8069, max=54359, avg=13831.37, stdev=1706.65 00:31:40.261 lat (usec): min=8076, max=54365, avg=13838.81, stdev=1706.59 00:31:40.261 clat percentiles (usec): 00:31:40.261 | 1.00th=[ 9896], 
5.00th=[11863], 10.00th=[12387], 20.00th=[12911], 00:31:40.261 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:31:40.261 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:31:40.261 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[50594], 00:31:40.261 | 99.99th=[54264] 00:31:40.261 bw ( KiB/s): min=26624, max=28672, per=33.52%, avg=27801.60, stdev=540.83, samples=20 00:31:40.261 iops : min= 208, max= 224, avg=217.20, stdev= 4.23, samples=20 00:31:40.261 lat (msec) : 10=1.01%, 20=98.90%, 100=0.09% 00:31:40.261 cpu : usr=95.01%, sys=4.74%, ctx=25, majf=0, minf=151 00:31:40.261 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.261 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.261 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.261 00:31:40.261 Run status group 0 (all jobs): 00:31:40.261 READ: bw=81.0MiB/s (84.9MB/s), 26.3MiB/s-27.7MiB/s (27.6MB/s-29.1MB/s), io=814MiB (853MB), run=10007-10048msec 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:40.261 00:31:40.261 real 0m11.139s 00:31:40.261 user 0m41.068s 00:31:40.261 sys 0m1.729s 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:40.261 12:35:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 ************************************ 00:31:40.261 END TEST fio_dif_digest 00:31:40.261 ************************************ 00:31:40.261 12:35:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:40.261 12:35:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:40.261 rmmod nvme_tcp 00:31:40.261 rmmod nvme_fabrics 00:31:40.261 rmmod 
nvme_keyring 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 870277 ']' 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 870277 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 870277 ']' 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 870277 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 870277 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 870277' 00:31:40.261 killing process with pid 870277 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@968 -- # kill 870277 00:31:40.261 12:35:44 nvmf_dif -- common/autotest_common.sh@973 -- # wait 870277 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:40.261 12:35:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:42.803 Waiting for block devices as requested 00:31:42.803 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:42.803 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:42.803 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:43.063 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:43.063 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:43.063 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:43.322 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:43.322 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:43.322 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:43.582 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:43.582 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:43.582 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:43.841 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:43.841 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:43.841 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:43.841 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:44.101 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:44.101 12:35:49 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.101 12:35:49 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.101 12:35:49 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.101 12:35:49 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.101 12:35:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.101 12:35:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:44.101 12:35:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.010 12:35:51 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.010 00:31:46.010 real 1m19.080s 00:31:46.010 user 8m5.495s 00:31:46.010 sys 0m20.919s 00:31:46.010 12:35:51 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:46.010 12:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:46.010 
************************************ 00:31:46.011 END TEST nvmf_dif 00:31:46.011 ************************************ 00:31:46.011 12:35:51 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.011 12:35:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:46.011 12:35:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:46.011 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:31:46.272 ************************************ 00:31:46.272 START TEST nvmf_abort_qd_sizes 00:31:46.272 ************************************ 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.272 * Looking for test storage... 00:31:46.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.272 12:35:51 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.272 12:35:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:54.408 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:54.408 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:54.408 Found net devices under 0000:31:00.0: cvl_0_0 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:54.408 Found net devices under 0000:31:00.1: cvl_0_1 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
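Note: with both E810 ports discovered (cvl_0_0 and cvl_0_1), the nvmf_tcp_init sequence that follows moves the target-side port into its own network namespace so initiator and target traffic crosses the physical link. Condensed from the xtrace below, using the device names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # sanity-check the path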
00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:31:54.408 00:31:54.408 --- 10.0.0.2 ping statistics --- 00:31:54.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.408 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:31:54.408 00:31:54.408 --- 10.0.0.1 ping statistics --- 00:31:54.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.408 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:54.408 12:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.681 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:58.681 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=890876 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 890876 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 890876 ']' 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:58.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:58.681 12:36:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.681 [2024-06-10 12:36:03.924160] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:31:58.681 [2024-06-10 12:36:03.924230] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.681 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.681 [2024-06-10 12:36:04.002614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:58.681 [2024-06-10 12:36:04.079422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.681 [2024-06-10 12:36:04.079458] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.681 [2024-06-10 12:36:04.079466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.681 [2024-06-10 12:36:04.079473] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.681 [2024-06-10 12:36:04.079478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.681 [2024-06-10 12:36:04.079616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.681 [2024-06-10 12:36:04.079732] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.681 [2024-06-10 12:36:04.079889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.681 [2024-06-10 12:36:04.079890] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:59.252 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:59.253 12:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:59.253 ************************************ 00:31:59.253 START TEST spdk_target_abort 00:31:59.253 ************************************ 00:31:59.253 12:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:31:59.253 12:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:59.253 12:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:59.253 12:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.253 12:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.513 spdk_targetn1 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.513 [2024-06-10 12:36:05.078123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.513 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.513 [2024-06-10 12:36:05.115432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:59.773 12:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.773 EAL: No free 2048 kB hugepages reported on node 1 
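Note: the abort example just launched is what produces everything below: each paired NOTICE prints a victim I/O command and its ABORTED - BY REQUEST completion, and the summary tallies follow. Flag meanings, paraphrased from SPDK's perf/abort example conventions rather than quoted from its usage text:

    # build/examples/abort \
    #   -q 4      per-queue depth; the test sweeps qds=(4 24 64)
    #   -w rw     mixed read/write workload
    #   -M 50     read percentage of the mix (50% reads here)
    #   -o 4096   4 KiB I/O size
    #   -r '...'  transport ID string naming the TCP target under attack
    # In the summary, "success"/"unsuccess" appear to count aborts that did
    # or did not catch their target command before it completed; "failed"
    # counts abort submissions that errored out. This reading is inferred
    # from the output, not taken from the example's source.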
00:31:59.773 [2024-06-10 12:36:05.282712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:408 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:59.773 [2024-06-10 12:36:05.282739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:31:59.773 [2024-06-10 12:36:05.320108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2216 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:59.773 [2024-06-10 12:36:05.320127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.773 [2024-06-10 12:36:05.333058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2720 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:59.773 [2024-06-10 12:36:05.333079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:03.072 Initializing NVMe Controllers 00:32:03.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.072 Initialization complete. Launching workers. 00:32:03.072 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15645, failed: 3 00:32:03.072 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3263, failed to submit 12385 00:32:03.072 success 665, unsuccess 2598, failed 0 00:32:03.072 12:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.072 12:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:03.072 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.072 [2024-06-10 12:36:08.635347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3056 len:8 PRP1 0x200007c44000 PRP2 0x0 00:32:03.072 [2024-06-10 12:36:08.635391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0089 p:0 m:0 dnr:0 00:32:04.460 [2024-06-10 12:36:09.739527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:28744 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:32:04.460 [2024-06-10 12:36:09.739565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:32:04.460 [2024-06-10 12:36:10.019409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:34744 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:32:04.460 [2024-06-10 12:36:10.019439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00fd p:1 m:0 dnr:0 00:32:05.401 [2024-06-10 12:36:10.847403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:54192 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:32:05.401 [2024-06-10 12:36:10.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:32:06.342 Initializing NVMe Controllers 00:32:06.342 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:06.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:06.342 Initialization complete. Launching workers. 00:32:06.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8647, failed: 4 00:32:06.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7414 00:32:06.342 success 370, unsuccess 867, failed 0 00:32:06.342 12:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:06.342 12:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:06.342 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.639 Initializing NVMe Controllers 00:32:09.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:09.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:09.639 Initialization complete. Launching workers. 00:32:09.639 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42512, failed: 0 00:32:09.639 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2618, failed to submit 39894 00:32:09.639 success 585, unsuccess 2033, failed 0 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.639 12:36:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 890876 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 890876 ']' 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 890876 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 890876 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
890876' 00:32:11.557 killing process with pid 890876 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 890876 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 890876 00:32:11.557 00:32:11.557 real 0m12.080s 00:32:11.557 user 0m48.982s 00:32:11.557 sys 0m1.796s 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.557 ************************************ 00:32:11.557 END TEST spdk_target_abort 00:32:11.557 ************************************ 00:32:11.557 12:36:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:11.557 12:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:11.557 12:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:11.557 12:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:11.557 ************************************ 00:32:11.557 START TEST kernel_target_abort 00:32:11.557 ************************************ 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:11.557 
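The spdk_target_abort trace above drives SPDK's abort example binary once per queue depth against the TCP target it just configured. A minimal sketch of that invocation pattern, using the flags and transport-ID string from this run (-q queue depth, -w rw -M 50 for a 50/50 read/write mix, -o 4096 for 4 KiB I/Os, -r to select transport, address, service ID, and subsystem NQN); run from the SPDK repo root:

    for qd in 4 24 64; do
        # -q: outstanding I/O per queue; -w rw -M 50: mixed workload, 50% reads;
        # -o 4096: 4 KiB I/O size; -r: transport ID naming the listener and NQN
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

In each per-run summary above, "I/O completed" counts the I/O the tool issued, "abort submitted / failed to submit" counts the abort commands it could and could not queue, and "success / unsuccess" splits the submitted aborts by whether the controller actually aborted the targeted command. The kernel_target_abort test starting here repeats the same sweep against an in-kernel nvmet target instead of the SPDK one.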
12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:11.557 12:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.764 Waiting for block devices as requested 00:32:15.764 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.764 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:16.025 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:16.025 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:16.025 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:16.284 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:16.284 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:16.284 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:16.284 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:16.545 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:16.545 12:36:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:16.545 No valid GPT data, bailing 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.545 12:36:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:32:16.545 00:32:16.545 Discovery Log Number of Records 2, Generation counter 2 00:32:16.545 =====Discovery Log Entry 0====== 00:32:16.545 trtype: tcp 00:32:16.545 adrfam: ipv4 00:32:16.545 subtype: current discovery subsystem 00:32:16.545 treq: not specified, sq flow control disable supported 00:32:16.545 portid: 1 00:32:16.545 trsvcid: 4420 00:32:16.545 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:16.545 traddr: 10.0.0.1 00:32:16.545 eflags: none 00:32:16.545 sectype: none 00:32:16.545 =====Discovery Log Entry 1====== 00:32:16.545 trtype: tcp 00:32:16.545 adrfam: ipv4 00:32:16.545 subtype: nvme subsystem 00:32:16.545 treq: not specified, sq flow control disable supported 00:32:16.545 portid: 1 00:32:16.545 trsvcid: 4420 00:32:16.545 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:16.545 traddr: 10.0.0.1 00:32:16.545 eflags: none 00:32:16.545 sectype: none 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:16.545 12:36:22 
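configure_kernel_target, traced above, builds that in-kernel target entirely through configfs. xtrace shows each echo but not its redirect target; the sketch below reconstructs the presumable full sequence against the kernel's standard nvmet attribute files (the attribute names are an assumption based on that layout, since the trace elides them):

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"            # model string (assumed target file)
    echo 1            > "$sub/attr_allow_any_host"   # no per-host allow-list
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                 # publish the subsystem on the port

The nvme discover output above confirms the result (a discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420), and rabort then repeats the 4/24/64 queue-depth sweep against it.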
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:16.545 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.546 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.546 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.546 12:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.546 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.838 Initializing NVMe Controllers 00:32:19.838 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.838 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:19.838 Initialization complete. Launching workers. 00:32:19.838 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64094, failed: 0 00:32:19.838 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 64094, failed to submit 0 00:32:19.838 success 0, unsuccess 64094, failed 0 00:32:19.838 12:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:19.838 12:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.838 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.137 Initializing NVMe Controllers 00:32:23.137 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:23.137 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:23.137 Initialization complete. Launching workers. 
00:32:23.137 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105616, failed: 0 00:32:23.137 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26602, failed to submit 79014 00:32:23.137 success 0, unsuccess 26602, failed 0 00:32:23.137 12:36:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:23.137 12:36:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:23.137 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.719 Initializing NVMe Controllers 00:32:25.719 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.719 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:25.719 Initialization complete. Launching workers. 00:32:25.719 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100918, failed: 0 00:32:25.719 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25218, failed to submit 75700 00:32:25.719 success 0, unsuccess 25218, failed 0 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:25.719 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:25.980 12:36:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:30.187 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:30.187 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:30.187 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:31.570 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:31.570 00:32:31.570 real 0m20.057s 00:32:31.570 user 0m9.598s 00:32:31.570 sys 0m6.096s 00:32:31.570 12:36:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:31.570 12:36:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.570 ************************************ 00:32:31.570 END TEST kernel_target_abort 00:32:31.570 ************************************ 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:31.570 rmmod nvme_tcp 00:32:31.570 rmmod nvme_fabrics 00:32:31.570 rmmod nvme_keyring 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 890876 ']' 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 890876 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 890876 ']' 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 890876 00:32:31.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (890876) - No such process 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 890876 is not found' 00:32:31.570 Process with pid 890876 is not found 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:31.570 12:36:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.771 Waiting for block devices as requested 00:32:35.771 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:35.771 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:36.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:36.030 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:36.030 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:36.290 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:36.290 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:36.290 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:36.550 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:36.550 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:36.550 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:32:36.550 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:36.550 12:36:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.106 12:36:44 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:39.106 00:32:39.106 real 0m52.541s 00:32:39.106 user 1m4.211s 00:32:39.106 sys 0m19.311s 00:32:39.106 12:36:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:39.106 12:36:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:39.106 ************************************ 00:32:39.106 END TEST nvmf_abort_qd_sizes 00:32:39.106 ************************************ 00:32:39.106 12:36:44 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:39.106 12:36:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:39.106 12:36:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:39.106 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:32:39.106 ************************************ 00:32:39.106 START TEST keyring_file 00:32:39.106 ************************************ 00:32:39.106 12:36:44 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:39.106 * Looking for test storage... 
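clean_kernel_target, traced a little above, is that bring-up in reverse; a sketch reusing $nqn, $sub, and $port from the setup sketch, with the same caveat about elided redirect targets:

    echo 0 > "$sub/namespaces/1/enable"        # quiesce the namespace first
    rm -f "$port/subsystems/$nqn"              # unlink the subsystem from the port
    rmdir "$sub/namespaces/1" "$port" "$sub"   # configfs dirs must be empty to rmdir
    modprobe -r nvmet_tcp nvmet                # only then can the modules unload

With the target gone, nvmftestfini unloads the host-side modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above) and setup.sh reset rebinds the ioatdma and NVMe devices, which is the vfio-pci/ioatdma column just above. The keyring_file suite starting here moves on to file-backed TLS PSKs for NVMe/TCP on 127.0.0.1.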
00:32:39.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.106 12:36:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.106 12:36:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.106 12:36:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.106 12:36:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.106 12:36:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.106 12:36:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.106 12:36:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:39.106 12:36:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:39.106 12:36:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.54c72E53iM 00:32:39.106 12:36:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.106 12:36:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:39.107 12:36:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.54c72E53iM 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.54c72E53iM 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.54c72E53iM 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.D45k4b7skW 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:39.107 12:36:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.D45k4b7skW 00:32:39.107 12:36:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.D45k4b7skW 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.D45k4b7skW 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=902046 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 902046 00:32:39.107 12:36:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 902046 ']' 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:39.107 12:36:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:39.107 [2024-06-10 12:36:44.586942] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
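prep_key, traced above for key0 and key1, converts a raw hex key into the NVMe/TCP TLS PSK interchange form and writes it to a mktemp file. The body of the traced python one-liner is not shown; the sketch below reconstructs what it presumably computes: version prefix, two-hex-digit PSK digest identifier (00 here, i.e. none), then base64 of the key bytes with a CRC32 appended. The little-endian CRC detail is an assumption, not visible in the trace:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)
    # emit NVMeTLSkey-1:<digest>:<base64(key || crc32)>: per the interchange format
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+crc).decode())' "$key" > "$path"
    chmod 0600 "$path"   # the add-key path checks mode bits, as a later negative test shows

The spdk_tgt instance started just above is the target side; a second app (bdevperf) will hold the keyring under test.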
00:32:39.107 [2024-06-10 12:36:44.587014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902046 ] 00:32:39.107 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.107 [2024-06-10 12:36:44.660917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.367 [2024-06-10 12:36:44.736320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:39.937 12:36:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:39.937 [2024-06-10 12:36:45.357669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.937 null0 00:32:39.937 [2024-06-10 12:36:45.389713] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:39.937 [2024-06-10 12:36:45.390005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:39.937 [2024-06-10 12:36:45.397730] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:39.937 12:36:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:39.937 [2024-06-10 12:36:45.409763] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:39.937 request: 00:32:39.937 { 00:32:39.937 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.937 "secure_channel": false, 00:32:39.937 "listen_address": { 00:32:39.937 "trtype": "tcp", 00:32:39.937 "traddr": "127.0.0.1", 00:32:39.937 "trsvcid": "4420" 00:32:39.937 }, 00:32:39.937 "method": "nvmf_subsystem_add_listener", 00:32:39.937 "req_id": 1 00:32:39.937 } 00:32:39.937 Got JSON-RPC error response 00:32:39.937 response: 00:32:39.937 { 00:32:39.937 "code": -32602, 00:32:39.937 "message": "Invalid parameters" 00:32:39.937 } 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:39.937 12:36:45 
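The @43 block above is this suite's first negative assertion: with the 127.0.0.1:4420 listener already present, re-adding it must fail, and it does with -32602 / "Listener already exists". The NOT wrapper from autotest_common.sh inverts the wrapped command's exit status so an expected failure passes the test; a condensed sketch of the idea (the real helper also distinguishes error classes, so this is an assumed simplification):

    NOT() { ! "$@"; }   # succeed only if the wrapped command fails
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0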
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:39.937 12:36:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=902228 00:32:39.937 12:36:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 902228 /var/tmp/bperf.sock 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 902228 ']' 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:39.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:39.937 12:36:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:39.937 12:36:45 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:39.937 [2024-06-10 12:36:45.470183] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:32:39.937 [2024-06-10 12:36:45.470274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902228 ] 00:32:39.937 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.198 [2024-06-10 12:36:45.553229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.198 [2024-06-10 12:36:45.616877] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.769 12:36:46 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:40.769 12:36:46 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:40.769 12:36:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:40.769 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:40.769 12:36:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.D45k4b7skW 00:32:40.769 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.D45k4b7skW 00:32:41.029 12:36:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:41.029 12:36:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:41.029 12:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.029 12:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:41.029 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.289 12:36:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.54c72E53iM == \/\t\m\p\/\t\m\p\.\5\4\c\7\2\E\5\3\i\M ]] 00:32:41.289 12:36:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:41.289 12:36:46 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.289 12:36:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.D45k4b7skW == \/\t\m\p\/\t\m\p\.\D\4\5\k\4\b\7\s\k\W ]] 00:32:41.289 12:36:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.289 12:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:41.549 12:36:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:41.549 12:36:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:41.549 12:36:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:41.549 12:36:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:41.549 12:36:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.549 12:36:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.549 12:36:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:41.549 12:36:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:41.549 12:36:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.809 [2024-06-10 12:36:47.293284] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:41.809 nvme0n1 00:32:41.809 12:36:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.809 12:36:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.069 12:36:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:42.069 12:36:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:42.069 12:36:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:42.069 12:36:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.069 12:36:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.069 
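Every keyring assertion in this stretch is the same two-helper pattern from keyring/common.sh: ask the bdevperf app for its keys over the RPC socket, filter by name with jq, and compare refcounts. Restated with the paths from this run:

    bperf_cmd()  { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    (( $(get_refcnt key0) == 2 ))   # keyring reference + the attached nvme0 TLS session
    (( $(get_refcnt key1) == 1 ))   # registered but not used by any controller

After bdev_nvme_attach_controller --psk key0 succeeds (the nvme0n1 line above), key0's refcount rises to 2 while key1, checked as the trace resumes below, stays at 1; the one-second bdevperf run then proves I/O flows over the TLS connection. The NOT-wrapped attach with --psk key1 that follows shows the inverse: a key the target was never provisioned with fails the handshake with -5 / Input/output error.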
12:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.069 12:36:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:42.329 12:36:47 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:42.329 12:36:47 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.329 Running I/O for 1 seconds... 00:32:43.269 00:32:43.269 Latency(us) 00:32:43.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.269 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:43.269 nvme0n1 : 1.01 13043.85 50.95 0.00 0.00 9783.42 3686.40 15291.73 00:32:43.269 =================================================================================================================== 00:32:43.269 Total : 13043.85 50.95 0.00 0.00 9783.42 3686.40 15291.73 00:32:43.269 0 00:32:43.269 12:36:48 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:43.269 12:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:43.529 12:36:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:43.529 12:36:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:43.529 12:36:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:43.529 12:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.529 12:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.529 12:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:43.529 12:36:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:43.529 12:36:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:43.529 12:36:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:43.529 12:36:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:43.529 12:36:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.529 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.529 12:36:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:43.789 12:36:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:43.789 12:36:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:43.789 12:36:49 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:43.789 12:36:49 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:43.789 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:44.050 [2024-06-10 12:36:49.433490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:44.050 [2024-06-10 12:36:49.433992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaa4a0 (107): Transport endpoint is not connected 00:32:44.050 [2024-06-10 12:36:49.434988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaa4a0 (9): Bad file descriptor 00:32:44.050 [2024-06-10 12:36:49.435990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:44.050 [2024-06-10 12:36:49.435998] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:44.050 [2024-06-10 12:36:49.436004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:44.050 request: 00:32:44.050 { 00:32:44.050 "name": "nvme0", 00:32:44.050 "trtype": "tcp", 00:32:44.050 "traddr": "127.0.0.1", 00:32:44.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.050 "adrfam": "ipv4", 00:32:44.050 "trsvcid": "4420", 00:32:44.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.050 "psk": "key1", 00:32:44.050 "method": "bdev_nvme_attach_controller", 00:32:44.050 "req_id": 1 00:32:44.050 } 00:32:44.050 Got JSON-RPC error response 00:32:44.050 response: 00:32:44.050 { 00:32:44.050 "code": -5, 00:32:44.050 "message": "Input/output error" 00:32:44.050 } 00:32:44.050 12:36:49 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:44.050 12:36:49 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:44.050 12:36:49 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:44.050 12:36:49 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:44.050 12:36:49 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.050 12:36:49 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:44.050 12:36:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:32:44.050 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.311 12:36:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:44.311 12:36:49 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:44.311 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:44.572 12:36:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:44.572 12:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:44.572 12:36:50 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:44.572 12:36:50 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:44.572 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.832 12:36:50 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:44.832 12:36:50 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:44.832 [2024-06-10 12:36:50.389790] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.54c72E53iM': 0100660 00:32:44.832 [2024-06-10 12:36:50.389811] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:44.832 request: 00:32:44.832 { 00:32:44.832 "name": "key0", 00:32:44.832 "path": "/tmp/tmp.54c72E53iM", 00:32:44.832 "method": "keyring_file_add_key", 00:32:44.832 "req_id": 1 00:32:44.832 } 00:32:44.832 Got JSON-RPC error response 00:32:44.832 response: 00:32:44.832 { 00:32:44.832 "code": -1, 00:32:44.832 "message": "Operation not permitted" 00:32:44.832 } 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:44.832 12:36:50 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:44.832 12:36:50 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:44.832 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.54c72E53iM 00:32:45.093 12:36:50 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.54c72E53iM 00:32:45.093 12:36:50 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:45.093 12:36:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:45.093 12:36:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:45.093 12:36:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.093 12:36:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:45.093 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.353 12:36:50 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:45.353 12:36:50 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:45.353 12:36:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:45.354 12:36:50 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.354 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.354 [2024-06-10 12:36:50.891051] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.54c72E53iM': No such file or directory 00:32:45.354 [2024-06-10 12:36:50.891067] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:45.354 [2024-06-10 12:36:50.891084] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:45.354 [2024-06-10 12:36:50.891088] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:45.354 [2024-06-10 12:36:50.891093] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:45.354 request: 00:32:45.354 { 00:32:45.354 "name": "nvme0", 00:32:45.354 "trtype": "tcp", 00:32:45.354 "traddr": "127.0.0.1", 00:32:45.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.354 "adrfam": "ipv4", 00:32:45.354 "trsvcid": "4420", 00:32:45.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.354 "psk": "key0", 00:32:45.354 "method": "bdev_nvme_attach_controller", 
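Two failure modes of the file-backed keyring bracket this point. Just above, keyring_file_add_key refused a key file with mode 0660 ("Invalid permissions for key file ... 0100660", RPC error -1) until it was chmod'ed back to 0600. Below, the backing file is removed after a successful add: the key object stays registered (get_refcnt key0 still answers 1), but the next attach that dereferences it fails at open time with "Could not stat key file ...: No such file or directory" and a -19 (No such device) response. Condensed, reusing $path from the prep_key sketch and the attach arguments from this run:

    bperf_cmd keyring_file_add_key key0 "$path"   # succeeds while the file exists
    rm -f "$path"                                 # the key object stays in the keyring...
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # ...but the attach now fails: the PSK is re-read from disk at connect time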
00:32:45.354 "req_id": 1 00:32:45.354 } 00:32:45.354 Got JSON-RPC error response 00:32:45.354 response: 00:32:45.354 { 00:32:45.354 "code": -19, 00:32:45.354 "message": "No such device" 00:32:45.354 } 00:32:45.354 12:36:50 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:45.354 12:36:50 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:45.354 12:36:50 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:45.354 12:36:50 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:45.354 12:36:50 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:45.354 12:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:45.613 12:36:51 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:45.613 12:36:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VYljM6Ra9X 00:32:45.614 12:36:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:45.614 12:36:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:45.614 12:36:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VYljM6Ra9X 00:32:45.614 12:36:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VYljM6Ra9X 00:32:45.614 12:36:51 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.VYljM6Ra9X 00:32:45.614 12:36:51 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VYljM6Ra9X 00:32:45.614 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VYljM6Ra9X 00:32:45.873 12:36:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.873 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.873 nvme0n1 00:32:46.135 12:36:51 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:46.135 12:36:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:46.135 12:36:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.135 12:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.135 12:36:51 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:46.135 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.135 12:36:51 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:46.135 12:36:51 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:46.135 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:46.397 12:36:51 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:46.397 12:36:51 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.397 12:36:51 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:46.397 12:36:51 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.397 12:36:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:46.659 12:36:52 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:46.659 12:36:52 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:46.659 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:46.943 12:36:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:46.943 12:36:52 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:46.943 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.943 12:36:52 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:46.943 12:36:52 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VYljM6Ra9X 00:32:46.943 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VYljM6Ra9X 00:32:47.248 12:36:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.D45k4b7skW 00:32:47.248 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.D45k4b7skW 00:32:47.248 12:36:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:47.248 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:47.508 nvme0n1 00:32:47.508 12:36:52 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:47.508 12:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:47.769 12:36:53 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:47.769 "subsystems": [ 00:32:47.769 { 00:32:47.769 "subsystem": "keyring", 00:32:47.769 "config": [ 00:32:47.769 { 00:32:47.769 "method": "keyring_file_add_key", 00:32:47.769 "params": { 00:32:47.769 "name": "key0", 00:32:47.769 "path": "/tmp/tmp.VYljM6Ra9X" 00:32:47.769 } 00:32:47.769 }, 00:32:47.769 { 00:32:47.769 "method": "keyring_file_add_key", 00:32:47.769 "params": { 00:32:47.769 "name": "key1", 00:32:47.769 "path": "/tmp/tmp.D45k4b7skW" 00:32:47.769 } 00:32:47.769 } 00:32:47.769 ] 00:32:47.769 }, 00:32:47.769 { 00:32:47.769 "subsystem": "iobuf", 00:32:47.769 "config": [ 00:32:47.769 { 00:32:47.769 "method": "iobuf_set_options", 00:32:47.769 "params": { 00:32:47.769 "small_pool_count": 8192, 00:32:47.769 "large_pool_count": 1024, 00:32:47.769 "small_bufsize": 8192, 00:32:47.769 "large_bufsize": 135168 00:32:47.769 } 00:32:47.769 } 00:32:47.769 ] 00:32:47.769 }, 00:32:47.769 { 00:32:47.769 "subsystem": "sock", 00:32:47.769 "config": [ 00:32:47.769 { 00:32:47.769 "method": "sock_set_default_impl", 00:32:47.769 "params": { 00:32:47.769 "impl_name": "posix" 00:32:47.769 } 00:32:47.769 }, 00:32:47.769 { 00:32:47.769 "method": "sock_impl_set_options", 00:32:47.769 "params": { 00:32:47.769 "impl_name": "ssl", 00:32:47.769 "recv_buf_size": 4096, 00:32:47.769 "send_buf_size": 4096, 00:32:47.769 "enable_recv_pipe": true, 00:32:47.769 "enable_quickack": false, 00:32:47.769 "enable_placement_id": 0, 00:32:47.769 "enable_zerocopy_send_server": true, 00:32:47.769 "enable_zerocopy_send_client": false, 00:32:47.769 "zerocopy_threshold": 0, 00:32:47.770 "tls_version": 0, 00:32:47.770 "enable_ktls": false 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "sock_impl_set_options", 00:32:47.770 "params": { 00:32:47.770 "impl_name": "posix", 00:32:47.770 "recv_buf_size": 2097152, 00:32:47.770 "send_buf_size": 2097152, 00:32:47.770 "enable_recv_pipe": true, 00:32:47.770 "enable_quickack": false, 00:32:47.770 "enable_placement_id": 0, 00:32:47.770 "enable_zerocopy_send_server": true, 00:32:47.770 "enable_zerocopy_send_client": false, 00:32:47.770 "zerocopy_threshold": 0, 00:32:47.770 "tls_version": 0, 00:32:47.770 "enable_ktls": false 00:32:47.770 } 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "vmd", 00:32:47.770 "config": [] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "accel", 00:32:47.770 "config": [ 00:32:47.770 { 00:32:47.770 "method": "accel_set_options", 00:32:47.770 "params": { 00:32:47.770 "small_cache_size": 128, 00:32:47.770 "large_cache_size": 16, 00:32:47.770 "task_count": 2048, 00:32:47.770 "sequence_count": 2048, 00:32:47.770 "buf_count": 2048 00:32:47.770 } 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "bdev", 00:32:47.770 "config": [ 00:32:47.770 { 00:32:47.770 "method": "bdev_set_options", 00:32:47.770 "params": { 00:32:47.770 "bdev_io_pool_size": 65535, 00:32:47.770 "bdev_io_cache_size": 256, 00:32:47.770 "bdev_auto_examine": true, 00:32:47.770 "iobuf_small_cache_size": 128, 
00:32:47.770 "iobuf_large_cache_size": 16 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_raid_set_options", 00:32:47.770 "params": { 00:32:47.770 "process_window_size_kb": 1024 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_iscsi_set_options", 00:32:47.770 "params": { 00:32:47.770 "timeout_sec": 30 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_nvme_set_options", 00:32:47.770 "params": { 00:32:47.770 "action_on_timeout": "none", 00:32:47.770 "timeout_us": 0, 00:32:47.770 "timeout_admin_us": 0, 00:32:47.770 "keep_alive_timeout_ms": 10000, 00:32:47.770 "arbitration_burst": 0, 00:32:47.770 "low_priority_weight": 0, 00:32:47.770 "medium_priority_weight": 0, 00:32:47.770 "high_priority_weight": 0, 00:32:47.770 "nvme_adminq_poll_period_us": 10000, 00:32:47.770 "nvme_ioq_poll_period_us": 0, 00:32:47.770 "io_queue_requests": 512, 00:32:47.770 "delay_cmd_submit": true, 00:32:47.770 "transport_retry_count": 4, 00:32:47.770 "bdev_retry_count": 3, 00:32:47.770 "transport_ack_timeout": 0, 00:32:47.770 "ctrlr_loss_timeout_sec": 0, 00:32:47.770 "reconnect_delay_sec": 0, 00:32:47.770 "fast_io_fail_timeout_sec": 0, 00:32:47.770 "disable_auto_failback": false, 00:32:47.770 "generate_uuids": false, 00:32:47.770 "transport_tos": 0, 00:32:47.770 "nvme_error_stat": false, 00:32:47.770 "rdma_srq_size": 0, 00:32:47.770 "io_path_stat": false, 00:32:47.770 "allow_accel_sequence": false, 00:32:47.770 "rdma_max_cq_size": 0, 00:32:47.770 "rdma_cm_event_timeout_ms": 0, 00:32:47.770 "dhchap_digests": [ 00:32:47.770 "sha256", 00:32:47.770 "sha384", 00:32:47.770 "sha512" 00:32:47.770 ], 00:32:47.770 "dhchap_dhgroups": [ 00:32:47.770 "null", 00:32:47.770 "ffdhe2048", 00:32:47.770 "ffdhe3072", 00:32:47.770 "ffdhe4096", 00:32:47.770 "ffdhe6144", 00:32:47.770 "ffdhe8192" 00:32:47.770 ] 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_nvme_attach_controller", 00:32:47.770 "params": { 00:32:47.770 "name": "nvme0", 00:32:47.770 "trtype": "TCP", 00:32:47.770 "adrfam": "IPv4", 00:32:47.770 "traddr": "127.0.0.1", 00:32:47.770 "trsvcid": "4420", 00:32:47.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.770 "prchk_reftag": false, 00:32:47.770 "prchk_guard": false, 00:32:47.770 "ctrlr_loss_timeout_sec": 0, 00:32:47.770 "reconnect_delay_sec": 0, 00:32:47.770 "fast_io_fail_timeout_sec": 0, 00:32:47.770 "psk": "key0", 00:32:47.770 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.770 "hdgst": false, 00:32:47.770 "ddgst": false 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_nvme_set_hotplug", 00:32:47.770 "params": { 00:32:47.770 "period_us": 100000, 00:32:47.770 "enable": false 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "bdev_wait_for_examine" 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "nbd", 00:32:47.770 "config": [] 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }' 00:32:47.770 12:36:53 keyring_file -- keyring/file.sh@114 -- # killprocess 902228 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 902228 ']' 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@953 -- # kill -0 902228 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 902228 00:32:47.770 12:36:53 keyring_file -- 
common/autotest_common.sh@955 -- # process_name=reactor_1
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 902228'
killing process with pid 902228
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@968 -- # kill 902228
00:32:47.770 Received shutdown signal, test time was about 1.000000 seconds
00:32:47.770
00:32:47.770 Latency(us)
00:32:47.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.770 ===================================================================================================================
00:32:47.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@973 -- # wait 902228
00:32:47.770 12:36:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=903807
00:32:47.770 12:36:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 903807 /var/tmp/bperf.sock
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 903807 ']'
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100
00:32:47.770 12:36:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
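The relaunch above hands the previously saved JSON to a fresh bdevperf through bash process substitution, which is where the /dev/fd/63 in the command line comes from, then blocks until the RPC socket answers. A minimal sketch of that pattern, using the binary and socket paths from this run; the rpc_get_methods poll is an assumed stand-in for the harness's waitforlisten helper, and $config is assumed to hold the save_config output captured earlier:

    BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # -z keeps the app idle until perform_tests; -c <(...) appears as /dev/fd/63 inside the process
    "$BPERF" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    # poll until the UNIX domain socket accepts a trivial RPC before driving the test
    until "$RPC" -s /var/tmp/bperf.sock rpc_get_methods > /dev/null 2>&1; do sleep 0.1; done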
00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:47.770 12:36:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:47.770 12:36:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:47.770 "subsystems": [ 00:32:47.770 { 00:32:47.770 "subsystem": "keyring", 00:32:47.770 "config": [ 00:32:47.770 { 00:32:47.770 "method": "keyring_file_add_key", 00:32:47.770 "params": { 00:32:47.770 "name": "key0", 00:32:47.770 "path": "/tmp/tmp.VYljM6Ra9X" 00:32:47.770 } 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "method": "keyring_file_add_key", 00:32:47.770 "params": { 00:32:47.770 "name": "key1", 00:32:47.770 "path": "/tmp/tmp.D45k4b7skW" 00:32:47.770 } 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "iobuf", 00:32:47.770 "config": [ 00:32:47.770 { 00:32:47.770 "method": "iobuf_set_options", 00:32:47.770 "params": { 00:32:47.770 "small_pool_count": 8192, 00:32:47.770 "large_pool_count": 1024, 00:32:47.770 "small_bufsize": 8192, 00:32:47.770 "large_bufsize": 135168 00:32:47.770 } 00:32:47.770 } 00:32:47.770 ] 00:32:47.770 }, 00:32:47.770 { 00:32:47.770 "subsystem": "sock", 00:32:47.770 "config": [ 00:32:47.770 { 00:32:47.770 "method": "sock_set_default_impl", 00:32:47.770 "params": { 00:32:47.770 "impl_name": "posix" 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "sock_impl_set_options", 00:32:47.771 "params": { 00:32:47.771 "impl_name": "ssl", 00:32:47.771 "recv_buf_size": 4096, 00:32:47.771 "send_buf_size": 4096, 00:32:47.771 "enable_recv_pipe": true, 00:32:47.771 "enable_quickack": false, 00:32:47.771 "enable_placement_id": 0, 00:32:47.771 "enable_zerocopy_send_server": true, 00:32:47.771 "enable_zerocopy_send_client": false, 00:32:47.771 "zerocopy_threshold": 0, 00:32:47.771 "tls_version": 0, 00:32:47.771 "enable_ktls": false 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "sock_impl_set_options", 00:32:47.771 "params": { 00:32:47.771 "impl_name": "posix", 00:32:47.771 "recv_buf_size": 2097152, 00:32:47.771 "send_buf_size": 2097152, 00:32:47.771 "enable_recv_pipe": true, 00:32:47.771 "enable_quickack": false, 00:32:47.771 "enable_placement_id": 0, 00:32:47.771 "enable_zerocopy_send_server": true, 00:32:47.771 "enable_zerocopy_send_client": false, 00:32:47.771 "zerocopy_threshold": 0, 00:32:47.771 "tls_version": 0, 00:32:47.771 "enable_ktls": false 00:32:47.771 } 00:32:47.771 } 00:32:47.771 ] 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "subsystem": "vmd", 00:32:47.771 "config": [] 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "subsystem": "accel", 00:32:47.771 "config": [ 00:32:47.771 { 00:32:47.771 "method": "accel_set_options", 00:32:47.771 "params": { 00:32:47.771 "small_cache_size": 128, 00:32:47.771 "large_cache_size": 16, 00:32:47.771 "task_count": 2048, 00:32:47.771 "sequence_count": 2048, 00:32:47.771 "buf_count": 2048 00:32:47.771 } 00:32:47.771 } 00:32:47.771 ] 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "subsystem": "bdev", 00:32:47.771 "config": [ 00:32:47.771 { 00:32:47.771 "method": "bdev_set_options", 00:32:47.771 "params": { 00:32:47.771 "bdev_io_pool_size": 65535, 00:32:47.771 "bdev_io_cache_size": 256, 00:32:47.771 "bdev_auto_examine": true, 00:32:47.771 "iobuf_small_cache_size": 128, 00:32:47.771 "iobuf_large_cache_size": 16 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "bdev_raid_set_options", 00:32:47.771 "params": { 00:32:47.771 "process_window_size_kb": 1024 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 
"method": "bdev_iscsi_set_options", 00:32:47.771 "params": { 00:32:47.771 "timeout_sec": 30 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "bdev_nvme_set_options", 00:32:47.771 "params": { 00:32:47.771 "action_on_timeout": "none", 00:32:47.771 "timeout_us": 0, 00:32:47.771 "timeout_admin_us": 0, 00:32:47.771 "keep_alive_timeout_ms": 10000, 00:32:47.771 "arbitration_burst": 0, 00:32:47.771 "low_priority_weight": 0, 00:32:47.771 "medium_priority_weight": 0, 00:32:47.771 "high_priority_weight": 0, 00:32:47.771 "nvme_adminq_poll_period_us": 10000, 00:32:47.771 "nvme_ioq_poll_period_us": 0, 00:32:47.771 "io_queue_requests": 512, 00:32:47.771 "delay_cmd_submit": true, 00:32:47.771 "transport_retry_count": 4, 00:32:47.771 "bdev_retry_count": 3, 00:32:47.771 "transport_ack_timeout": 0, 00:32:47.771 "ctrlr_loss_timeout_sec": 0, 00:32:47.771 "reconnect_delay_sec": 0, 00:32:47.771 "fast_io_fail_timeout_sec": 0, 00:32:47.771 "disable_auto_failback": false, 00:32:47.771 "generate_uuids": false, 00:32:47.771 "transport_tos": 0, 00:32:47.771 "nvme_error_stat": false, 00:32:47.771 "rdma_srq_size": 0, 00:32:47.771 "io_path_stat": false, 00:32:47.771 "allow_accel_sequence": false, 00:32:47.771 "rdma_max_cq_size": 0, 00:32:47.771 "rdma_cm_event_timeout_ms": 0, 00:32:47.771 "dhchap_digests": [ 00:32:47.771 "sha256", 00:32:47.771 "sha384", 00:32:47.771 "sha512" 00:32:47.771 ], 00:32:47.771 "dhchap_dhgroups": [ 00:32:47.771 "null", 00:32:47.771 "ffdhe2048", 00:32:47.771 "ffdhe3072", 00:32:47.771 "ffdhe4096", 00:32:47.771 "ffdhe6144", 00:32:47.771 "ffdhe8192" 00:32:47.771 ] 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "bdev_nvme_attach_controller", 00:32:47.771 "params": { 00:32:47.771 "name": "nvme0", 00:32:47.771 "trtype": "TCP", 00:32:47.771 "adrfam": "IPv4", 00:32:47.771 "traddr": "127.0.0.1", 00:32:47.771 "trsvcid": "4420", 00:32:47.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.771 "prchk_reftag": false, 00:32:47.771 "prchk_guard": false, 00:32:47.771 "ctrlr_loss_timeout_sec": 0, 00:32:47.771 "reconnect_delay_sec": 0, 00:32:47.771 "fast_io_fail_timeout_sec": 0, 00:32:47.771 "psk": "key0", 00:32:47.771 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.771 "hdgst": false, 00:32:47.771 "ddgst": false 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "bdev_nvme_set_hotplug", 00:32:47.771 "params": { 00:32:47.771 "period_us": 100000, 00:32:47.771 "enable": false 00:32:47.771 } 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "method": "bdev_wait_for_examine" 00:32:47.771 } 00:32:47.771 ] 00:32:47.771 }, 00:32:47.771 { 00:32:47.771 "subsystem": "nbd", 00:32:47.771 "config": [] 00:32:47.771 } 00:32:47.771 ] 00:32:47.771 }' 00:32:48.032 [2024-06-10 12:36:53.409629] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:32:48.032 [2024-06-10 12:36:53.409681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903807 ] 00:32:48.032 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.032 [2024-06-10 12:36:53.483747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.032 [2024-06-10 12:36:53.538102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.292 [2024-06-10 12:36:53.679772] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:48.863 12:36:54 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:48.863 12:36:54 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:48.863 12:36:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:48.863 12:36:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.863 12:36:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:48.863 12:36:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.863 12:36:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:49.124 12:36:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:49.124 12:36:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:49.124 12:36:54 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:49.124 12:36:54 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:49.124 12:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:49.124 12:36:54 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:49.384 12:36:54 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:49.384 12:36:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:49.384 12:36:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VYljM6Ra9X /tmp/tmp.D45k4b7skW 00:32:49.384 12:36:54 keyring_file -- keyring/file.sh@20 -- # killprocess 903807 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 903807 ']' 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@953 -- # kill -0 903807 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 903807 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 903807' 00:32:49.384 killing process with pid 903807 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@968 -- # kill 903807 00:32:49.384 Received shutdown signal, test time was about 1.000000 seconds 00:32:49.384 00:32:49.384 Latency(us) 00:32:49.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.384 =================================================================================================================== 00:32:49.384 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:49.384 12:36:54 keyring_file -- common/autotest_common.sh@973 -- # wait 903807 00:32:49.645 12:36:54 keyring_file -- keyring/file.sh@21 -- # killprocess 902046 00:32:49.645 12:36:54 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 902046 ']' 00:32:49.645 12:36:54 keyring_file -- common/autotest_common.sh@953 -- # kill -0 902046 00:32:49.645 12:36:54 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:49.645 12:36:54 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 902046 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 902046' 00:32:49.645 killing process with pid 902046 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@968 -- # kill 902046 00:32:49.645 [2024-06-10 12:36:55.050614] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:49.645 12:36:55 keyring_file -- common/autotest_common.sh@973 -- # wait 902046 00:32:49.907 00:32:49.907 real 0m10.989s 00:32:49.907 user 0m25.996s 00:32:49.907 sys 0m2.642s 00:32:49.907 12:36:55 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:49.907 12:36:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.907 ************************************ 00:32:49.907 END TEST keyring_file 00:32:49.907 ************************************ 00:32:49.907 12:36:55 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:49.907 12:36:55 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.907 12:36:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:49.907 12:36:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:49.907 12:36:55 -- common/autotest_common.sh@10 -- # set +x 00:32:49.907 ************************************ 00:32:49.907 START TEST keyring_linux 00:32:49.907 ************************************ 00:32:49.907 12:36:55 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.907 * Looking for test storage... 
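The keyring_file suite that just finished hinges on one permission gate worth restating: keyring_file_add_key rejects any key file readable beyond its owner, which is why the 0660 attempt earlier failed with 'Operation not permitted' (code -1) while the 0600 retry succeeded. Condensed into a sketch; the key path is hypothetical, since the test uses mktemp:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/psk.key                                                 # hypothetical path
    chmod 0660 "$key"
    "$RPC" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"   # rejected: Operation not permitted
    chmod 0600 "$key"
    "$RPC" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"   # accepted; refcnt starts at 1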
00:32:49.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.907 12:36:55 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.907 12:36:55 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.907 12:36:55 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.907 12:36:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.907 12:36:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.907 12:36:55 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.907 12:36:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:49.907 12:36:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:49.907 12:36:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:49.907 12:36:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:49.907 12:36:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:50.169 12:36:55 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:50.169 /tmp/:spdk-test:key0 00:32:50.169 12:36:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:50.169 12:36:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:50.169 12:36:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:50.169 /tmp/:spdk-test:key1 00:32:50.169 12:36:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:50.169 12:36:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=904435 00:32:50.169 12:36:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 904435 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 904435 ']' 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:50.169 12:36:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:50.169 [2024-06-10 12:36:55.612845] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
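The inline python - steps above are what turn the raw hex keys into the NVMeTLSkey-1:00:...: strings that land on the kernel keyring below. A sketch of the apparent computation, following the TLS PSK interchange layout (prefix, '00' for an unhashed key, then base64 of the configured PSK bytes with a CRC32 appended); the little-endian CRC packing is inferred from the output in this log, not copied from nvmf/common.sh:

    python - <<'EOF'
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff"            # the configured PSK, kept as ASCII
    payload = key + struct.pack("<I", zlib.crc32(key))   # assumed: CRC32 of the key, little-endian
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(payload).decode())
    EOF
    # expected to print the same NVMeTLSkey-1:00:MDAx...JEiQ: string used in the keyctl calls below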
00:32:50.169 [2024-06-10 12:36:55.612915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904435 ] 00:32:50.169 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.169 [2024-06-10 12:36:55.685472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.169 [2024-06-10 12:36:55.759070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.111 12:36:56 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:51.111 12:36:56 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:51.111 12:36:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:51.111 12:36:56 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.111 12:36:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:51.111 [2024-06-10 12:36:56.402541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.111 null0 00:32:51.112 [2024-06-10 12:36:56.434586] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:51.112 [2024-06-10 12:36:56.435098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.112 12:36:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:51.112 515252712 00:32:51.112 12:36:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:51.112 1017626423 00:32:51.112 12:36:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=904470 00:32:51.112 12:36:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 904470 /var/tmp/bperf.sock 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 904470 ']' 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:51.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:51.112 12:36:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:51.112 12:36:56 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:51.112 [2024-06-10 12:36:56.517329] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
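With both interchange strings parked on the session keyring (@s) as user-type keys, the remaining flow is: enable the Linux keyring backend over RPC while bdevperf still sits in --wait-for-rpc, finish init, and attach with --psk naming the kernel key instead of a file. Condensed from the commands visible in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the key serial, 515252712 here
    "$RPC" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    "$RPC" -s /var/tmp/bperf.sock framework_start_init
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0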
00:32:51.112 [2024-06-10 12:36:56.517378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904470 ] 00:32:51.112 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.112 [2024-06-10 12:36:56.598041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.112 [2024-06-10 12:36:56.651420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.683 12:36:57 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:51.683 12:36:57 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:51.683 12:36:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:51.683 12:36:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:51.944 12:36:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:51.944 12:36:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:52.204 12:36:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:52.204 12:36:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:52.204 [2024-06-10 12:36:57.749643] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:52.465 nvme0n1 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:52.465 12:36:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:52.465 12:36:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:52.465 12:36:58 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:52.465 12:36:58 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:52.465 12:36:58 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.465 12:36:58 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:52.465 12:36:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@25 -- # sn=515252712 00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
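check_keys above cross-checks SPDK's view against the kernel's: the key count reported by keyring_get_keys, then the serial number (.sn) SPDK exposes for the key against what keyctl search resolves on the session keyring. Roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$("$RPC" -s /var/tmp/bperf.sock keyring_get_keys | jq length)   # 1 while nvme0 holds :spdk-test:key0
    sn=$("$RPC" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0").sn')
    [[ "$sn" == "$(keyctl search @s user :spdk-test:key0)" ]]             # 515252712 in this run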
00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@26 -- # [[ 515252712 == \5\1\5\2\5\2\7\1\2 ]]
00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 515252712
00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:32:52.725 12:36:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:52.725 Running I/O for 1 seconds...
00:32:53.666
00:32:53.666 Latency(us)
00:32:53.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.666 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:53.666 nvme0n1 : 1.01 13443.89 52.52 0.00 0.00 9475.06 2143.57 10321.92
00:32:53.666 ===================================================================================================================
00:32:53.666 Total : 13443.89 52.52 0.00 0.00 9475.06 2143.57 10321.92
00:32:53.666 0
00:32:53.666 12:36:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:53.666 12:36:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:53.926 12:36:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:32:53.926 12:36:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:32:53.926 12:36:59 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:32:53.926 12:36:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:32:53.926 12:36:59 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:32:53.926 12:36:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:54.186 12:36:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@23 -- # return
00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@649 -- # local es=0
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:54.187 12:36:59 keyring_linux -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:54.187 [2024-06-10 12:36:59.724901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:54.187 [2024-06-10 12:36:59.725669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfc480 (107): Transport endpoint is not connected 00:32:54.187 [2024-06-10 12:36:59.726665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfc480 (9): Bad file descriptor 00:32:54.187 [2024-06-10 12:36:59.727666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.187 [2024-06-10 12:36:59.727675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:54.187 [2024-06-10 12:36:59.727680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.187 request: 00:32:54.187 { 00:32:54.187 "name": "nvme0", 00:32:54.187 "trtype": "tcp", 00:32:54.187 "traddr": "127.0.0.1", 00:32:54.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.187 "adrfam": "ipv4", 00:32:54.187 "trsvcid": "4420", 00:32:54.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.187 "psk": ":spdk-test:key1", 00:32:54.187 "method": "bdev_nvme_attach_controller", 00:32:54.187 "req_id": 1 00:32:54.187 } 00:32:54.187 Got JSON-RPC error response 00:32:54.187 response: 00:32:54.187 { 00:32:54.187 "code": -5, 00:32:54.187 "message": "Input/output error" 00:32:54.187 } 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@33 -- # sn=515252712 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 515252712 00:32:54.187 1 links removed 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@33 -- # sn=1017626423 00:32:54.187 12:36:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1017626423 00:32:54.187 1 links removed 00:32:54.187 12:36:59 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 904470 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 904470 ']' 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 904470 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:54.187 12:36:59 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 904470 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 904470' 00:32:54.448 killing process with pid 904470 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@968 -- # kill 904470 00:32:54.448 Received shutdown signal, test time was about 1.000000 seconds 00:32:54.448 00:32:54.448 Latency(us) 00:32:54.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.448 =================================================================================================================== 00:32:54.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@973 -- # wait 904470 00:32:54.448 12:36:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 904435 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 904435 ']' 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 904435 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 904435 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 904435' 00:32:54.448 killing process with pid 904435 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@968 -- # kill 904435 00:32:54.448 12:36:59 keyring_linux -- common/autotest_common.sh@973 -- # wait 904435 00:32:54.709 00:32:54.709 real 0m4.841s 00:32:54.709 user 0m8.588s 00:32:54.709 sys 0m1.375s 00:32:54.709 12:37:00 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:54.709 12:37:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:54.709 ************************************ 00:32:54.709 END TEST keyring_linux 00:32:54.709 ************************************ 00:32:54.709 12:37:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:54.709 12:37:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:54.709 12:37:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:54.710 12:37:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:54.710 12:37:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:54.710 12:37:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:54.710 12:37:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:54.710 12:37:00 -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:54.710 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:32:54.710 12:37:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:54.710 12:37:00 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:32:54.710 12:37:00 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:32:54.710 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:33:03.115 INFO: APP EXITING 00:33:03.115 INFO: killing all VMs 00:33:03.115 INFO: killing vhost app 00:33:03.115 INFO: EXIT DONE 00:33:06.416 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:06.416 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:06.416 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:10.623 Cleaning 00:33:10.623 Removing: /var/run/dpdk/spdk0/config 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:10.623 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:10.623 Removing: /var/run/dpdk/spdk1/config 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:10.623 Removing: 
00:33:10.623 Cleaning
00:33:10.623 Removing: /var/run/dpdk/spdk0/config
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:33:10.623 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:10.623 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:10.623 Removing: /var/run/dpdk/spdk1/config
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:33:10.623 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:33:10.623 Removing: /var/run/dpdk/spdk1/hugepage_info
00:33:10.623 Removing: /var/run/dpdk/spdk1/mp_socket
00:33:10.624 Removing: /var/run/dpdk/spdk2/config
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:10.624 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:33:10.624 Removing: /var/run/dpdk/spdk2/hugepage_info
00:33:10.624 Removing: /var/run/dpdk/spdk3/config
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:10.624 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:10.624 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:10.624 Removing: /var/run/dpdk/spdk4/config
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:10.624 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:10.624 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:10.624 Removing: /dev/shm/bdev_svc_trace.1
00:33:10.624 Removing: /dev/shm/nvmf_trace.0
00:33:10.624 Removing: /dev/shm/spdk_tgt_trace.pid418715
00:33:10.624 Removing: /var/run/dpdk/spdk0
00:33:10.624 Removing: /var/run/dpdk/spdk1
00:33:10.624 Removing: /var/run/dpdk/spdk2
00:33:10.624 Removing: /var/run/dpdk/spdk3
00:33:10.624 Removing: /var/run/dpdk/spdk4
00:33:10.624 Removing: /var/run/dpdk/spdk_pid417164
00:33:10.624 Removing: /var/run/dpdk/spdk_pid418715
00:33:10.624 Removing: /var/run/dpdk/spdk_pid419256
00:33:10.624 Removing: /var/run/dpdk/spdk_pid420587
00:33:10.624 Removing: /var/run/dpdk/spdk_pid420736
00:33:10.624 Removing: /var/run/dpdk/spdk_pid422453
00:33:10.624 Removing: /var/run/dpdk/spdk_pid422567
00:33:10.624 Removing: /var/run/dpdk/spdk_pid422950
00:33:10.624 Removing: /var/run/dpdk/spdk_pid423838
00:33:10.624 Removing: /var/run/dpdk/spdk_pid424594
00:33:10.624 Removing: /var/run/dpdk/spdk_pid424963
00:33:10.624 Removing: /var/run/dpdk/spdk_pid425239
00:33:10.624 Removing: /var/run/dpdk/spdk_pid425543
00:33:10.624 Removing: /var/run/dpdk/spdk_pid425844
00:33:10.624 Removing: /var/run/dpdk/spdk_pid426202
00:33:10.624 Removing: /var/run/dpdk/spdk_pid426552
00:33:10.624 Removing: /var/run/dpdk/spdk_pid426897
00:33:10.624 Removing: /var/run/dpdk/spdk_pid427997
00:33:10.624 Removing: /var/run/dpdk/spdk_pid431368
00:33:10.624 Removing: /var/run/dpdk/spdk_pid431673
00:33:10.624 Removing: /var/run/dpdk/spdk_pid431990
00:33:10.624 Removing: /var/run/dpdk/spdk_pid432314
00:33:10.624 Removing: /var/run/dpdk/spdk_pid432689
00:33:10.624 Removing: /var/run/dpdk/spdk_pid432765
00:33:10.624 Removing: /var/run/dpdk/spdk_pid433363
00:33:10.624 Removing: /var/run/dpdk/spdk_pid433412
00:33:10.624 Removing: /var/run/dpdk/spdk_pid433773
00:33:10.624 Removing: /var/run/dpdk/spdk_pid433959
00:33:10.624 Removing: /var/run/dpdk/spdk_pid434151
00:33:10.624 Removing: /var/run/dpdk/spdk_pid434400
00:33:10.624 Removing: /var/run/dpdk/spdk_pid434923
00:33:10.624 Removing: /var/run/dpdk/spdk_pid435153
00:33:10.624 Removing: /var/run/dpdk/spdk_pid435415
00:33:10.624 Removing: /var/run/dpdk/spdk_pid435718
00:33:10.624 Removing: /var/run/dpdk/spdk_pid435795
00:33:10.624 Removing: /var/run/dpdk/spdk_pid436121
00:33:10.624 Removing: /var/run/dpdk/spdk_pid436333
00:33:10.624 Removing: /var/run/dpdk/spdk_pid436530
00:33:10.624 Removing: /var/run/dpdk/spdk_pid436864
00:33:10.624 Removing: /var/run/dpdk/spdk_pid437211
00:33:10.624 Removing: /var/run/dpdk/spdk_pid437563
00:33:10.624 Removing: /var/run/dpdk/spdk_pid437819
00:33:10.624 Removing: /var/run/dpdk/spdk_pid438001
00:33:10.624 Removing: /var/run/dpdk/spdk_pid438299
00:33:10.624 Removing: /var/run/dpdk/spdk_pid438655
00:33:10.624 Removing: /var/run/dpdk/spdk_pid439004
00:33:10.624 Removing: /var/run/dpdk/spdk_pid439302
00:33:10.624 Removing: /var/run/dpdk/spdk_pid439502
00:33:10.624 Removing: /var/run/dpdk/spdk_pid439745
00:33:10.624 Removing: /var/run/dpdk/spdk_pid440094
00:33:10.624 Removing: /var/run/dpdk/spdk_pid440447
00:33:10.624 Removing: /var/run/dpdk/spdk_pid440798
00:33:10.624 Removing: /var/run/dpdk/spdk_pid441024
00:33:10.624 Removing: /var/run/dpdk/spdk_pid441224
00:33:10.624 Removing: /var/run/dpdk/spdk_pid441541
00:33:10.624 Removing: /var/run/dpdk/spdk_pid441900
00:33:10.624 Removing: /var/run/dpdk/spdk_pid441988
00:33:10.624 Removing: /var/run/dpdk/spdk_pid442378
00:33:10.624 Removing: /var/run/dpdk/spdk_pid447478
00:33:10.624 Removing: /var/run/dpdk/spdk_pid505121
00:33:10.624 Removing: /var/run/dpdk/spdk_pid510833
00:33:10.624 Removing: /var/run/dpdk/spdk_pid523290
00:33:10.624 Removing: /var/run/dpdk/spdk_pid530857
00:33:10.624 Removing: /var/run/dpdk/spdk_pid536230
00:33:10.624 Removing: /var/run/dpdk/spdk_pid536917
00:33:10.624 Removing: /var/run/dpdk/spdk_pid551958
00:33:10.624 Removing: /var/run/dpdk/spdk_pid551962
00:33:10.624 Removing: /var/run/dpdk/spdk_pid552966
00:33:10.624 Removing: /var/run/dpdk/spdk_pid553968
00:33:10.885 Removing: /var/run/dpdk/spdk_pid554976
00:33:10.885 Removing: /var/run/dpdk/spdk_pid555654
00:33:10.885 Removing: /var/run/dpdk/spdk_pid555675
00:33:10.885 Removing: /var/run/dpdk/spdk_pid555995
00:33:10.885 Removing: /var/run/dpdk/spdk_pid556168
00:33:10.885 Removing: /var/run/dpdk/spdk_pid556290
00:33:10.885 Removing: /var/run/dpdk/spdk_pid557327
00:33:10.885 Removing: /var/run/dpdk/spdk_pid558335
00:33:10.885 Removing: /var/run/dpdk/spdk_pid559343
00:33:10.885 Removing: /var/run/dpdk/spdk_pid560013
00:33:10.885 Removing: /var/run/dpdk/spdk_pid560015
00:33:10.885 Removing: /var/run/dpdk/spdk_pid560356
00:33:10.885 Removing: /var/run/dpdk/spdk_pid561747
00:33:10.885 Removing: /var/run/dpdk/spdk_pid562932
00:33:10.885 Removing: /var/run/dpdk/spdk_pid573560
00:33:10.885 Removing: /var/run/dpdk/spdk_pid574040
00:33:10.885 Removing: /var/run/dpdk/spdk_pid580170
00:33:10.885 Removing: /var/run/dpdk/spdk_pid587584
00:33:10.885 Removing: /var/run/dpdk/spdk_pid590664
00:33:10.885 Removing: /var/run/dpdk/spdk_pid603847
00:33:10.885 Removing: /var/run/dpdk/spdk_pid615566
00:33:10.885 Removing: /var/run/dpdk/spdk_pid617676
00:33:10.885 Removing: /var/run/dpdk/spdk_pid618849
00:33:10.885 Removing: /var/run/dpdk/spdk_pid641135
00:33:10.885 Removing: /var/run/dpdk/spdk_pid646295
00:33:10.885 Removing: /var/run/dpdk/spdk_pid677605
00:33:10.885 Removing: /var/run/dpdk/spdk_pid683928
00:33:10.885 Removing: /var/run/dpdk/spdk_pid685849
00:33:10.885 Removing: /var/run/dpdk/spdk_pid687949
00:33:10.885 Removing: /var/run/dpdk/spdk_pid688283
00:33:10.885 Removing: /var/run/dpdk/spdk_pid688440
00:33:10.885 Removing: /var/run/dpdk/spdk_pid688645
00:33:10.885 Removing: /var/run/dpdk/spdk_pid689366
00:33:10.885 Removing: /var/run/dpdk/spdk_pid691372
00:33:10.885 Removing: /var/run/dpdk/spdk_pid692452
00:33:10.885 Removing: /var/run/dpdk/spdk_pid693155
00:33:10.885 Removing: /var/run/dpdk/spdk_pid695646
00:33:10.885 Removing: /var/run/dpdk/spdk_pid696396
00:33:10.885 Removing: /var/run/dpdk/spdk_pid697274
00:33:10.885 Removing: /var/run/dpdk/spdk_pid702687
00:33:10.885 Removing: /var/run/dpdk/spdk_pid715654
00:33:10.885 Removing: /var/run/dpdk/spdk_pid720468
00:33:10.885 Removing: /var/run/dpdk/spdk_pid728648
00:33:10.885 Removing: /var/run/dpdk/spdk_pid730398
00:33:10.885 Removing: /var/run/dpdk/spdk_pid731937
00:33:10.885 Removing: /var/run/dpdk/spdk_pid737697
00:33:10.885 Removing: /var/run/dpdk/spdk_pid743085
00:33:10.885 Removing: /var/run/dpdk/spdk_pid753182
00:33:10.885 Removing: /var/run/dpdk/spdk_pid753186
00:33:10.885 Removing: /var/run/dpdk/spdk_pid758700
00:33:10.885 Removing: /var/run/dpdk/spdk_pid758915
00:33:10.885 Removing: /var/run/dpdk/spdk_pid759247
00:33:10.885 Removing: /var/run/dpdk/spdk_pid759796
00:33:10.885 Removing: /var/run/dpdk/spdk_pid759915
00:33:10.885 Removing: /var/run/dpdk/spdk_pid765943
00:33:10.885 Removing: /var/run/dpdk/spdk_pid766471
00:33:10.886 Removing: /var/run/dpdk/spdk_pid772312
00:33:10.886 Removing: /var/run/dpdk/spdk_pid775667
00:33:10.886 Removing: /var/run/dpdk/spdk_pid782706
00:33:10.886 Removing: /var/run/dpdk/spdk_pid790171
00:33:10.886 Removing: /var/run/dpdk/spdk_pid800440
00:33:10.886 Removing: /var/run/dpdk/spdk_pid809566
00:33:10.886 Removing: /var/run/dpdk/spdk_pid809583
00:33:10.886 Removing: /var/run/dpdk/spdk_pid833508
00:33:10.886 Removing: /var/run/dpdk/spdk_pid834188
00:33:11.147 Removing: /var/run/dpdk/spdk_pid834874
00:33:11.147 Removing: /var/run/dpdk/spdk_pid835612
00:33:11.147 Removing: /var/run/dpdk/spdk_pid836623
00:33:11.147 Removing: /var/run/dpdk/spdk_pid837311
00:33:11.147 Removing: /var/run/dpdk/spdk_pid838195
00:33:11.147 Removing: /var/run/dpdk/spdk_pid839068
00:33:11.147 Removing: /var/run/dpdk/spdk_pid845071
00:33:11.147 Removing: /var/run/dpdk/spdk_pid845282
00:33:11.147 Removing: /var/run/dpdk/spdk_pid852983
00:33:11.147 Removing: /var/run/dpdk/spdk_pid853359
00:33:11.147 Removing: /var/run/dpdk/spdk_pid855870
00:33:11.147 Removing: /var/run/dpdk/spdk_pid863662
00:33:11.147 Removing: /var/run/dpdk/spdk_pid863736
00:33:11.147 Removing: /var/run/dpdk/spdk_pid870359
00:33:11.147 Removing: /var/run/dpdk/spdk_pid872852
00:33:11.147 Removing: /var/run/dpdk/spdk_pid875060
00:33:11.147 Removing: /var/run/dpdk/spdk_pid876546
00:33:11.147 Removing: /var/run/dpdk/spdk_pid878947
00:33:11.147 Removing: /var/run/dpdk/spdk_pid880273
00:33:11.147 Removing: /var/run/dpdk/spdk_pid891233
00:33:11.147 Removing: /var/run/dpdk/spdk_pid891857
00:33:11.147 Removing: /var/run/dpdk/spdk_pid892881
00:33:11.147 Removing: /var/run/dpdk/spdk_pid895779
00:33:11.147 Removing: /var/run/dpdk/spdk_pid896437
00:33:11.147 Removing: /var/run/dpdk/spdk_pid897105
00:33:11.147 Removing: /var/run/dpdk/spdk_pid902046
00:33:11.147 Removing: /var/run/dpdk/spdk_pid902228
00:33:11.147 Removing: /var/run/dpdk/spdk_pid903807
00:33:11.147 Removing: /var/run/dpdk/spdk_pid904435
00:33:11.147 Removing: /var/run/dpdk/spdk_pid904470
00:33:11.147 Clean
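The Removing: lines above enumerate DPDK's per-process runtime state left behind by the test run: one /var/run/dpdk/spdkN directory per SPDK instance (config, fbarray_memseg-* hugepage mappings, fbarray_memzone, hugepage_info, mp_socket), trace files in /dev/shm, and one spdk_pidNNNNNN lock file for every SPDK process the job started. A hypothetical sweep producing that output could look like the sketch below; the actual autotest_cleanup implementation may differ:

    #!/usr/bin/env bash
    # Hypothetical cleanup sweep over DPDK/SPDK runtime state, mirroring the
    # "Removing:" output above. Paths are the ones shown in the log; the
    # helper itself is assumed, not taken from autotest_common.sh.
    shopt -s nullglob
    for f in /var/run/dpdk/spdk*/* /var/run/dpdk/spdk* /dev/shm/*trace* /dev/shm/spdk_tgt_trace.pid*; do
        echo "Removing: $f"
        sudo rm -rf "$f"
    done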
00:33:11.147 12:37:16 -- common/autotest_common.sh@1450 -- # return 0
00:33:11.147 12:37:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:33:11.147 12:37:16 -- common/autotest_common.sh@729 -- # xtrace_disable
00:33:11.147 12:37:16 -- common/autotest_common.sh@10 -- # set +x
00:33:11.147 12:37:16 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:33:11.147 12:37:16 -- common/autotest_common.sh@729 -- # xtrace_disable
00:33:11.147 12:37:16 -- common/autotest_common.sh@10 -- # set +x
00:33:11.408 12:37:16 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:11.408 12:37:16 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:11.408 12:37:16 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:11.408 12:37:16 -- spdk/autotest.sh@391 -- # hash lcov
00:33:11.408 12:37:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:11.408 12:37:16 -- spdk/autotest.sh@393 -- # hostname
00:33:11.408 12:37:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:11.408 geninfo: WARNING: invalid characters removed from testname!
00:33:38.052 12:37:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:38.313 12:37:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:40.227 12:37:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:41.621 12:37:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:43.007 12:37:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:44.920 12:37:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:46.305 12:37:51 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
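autotest.sh lines 393 through 400 above form a capture-merge-filter coverage flow: capture what the tests exercised into cov_test.info, merge it with the pre-test baseline cov_base.info, then repeatedly strip bundled dependencies and uninteresting paths from the combined tracefile. Condensed into a standalone sketch using the same commands and patterns as traced, with the repeated --rc option list factored into a variable:

    #!/usr/bin/env bash
    # Condensed version of the coverage steps traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$rootdir/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
        --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

    # capture the coverage accumulated during the test run
    lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"
    # merge with the baseline captured before the tests started
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # drop bundled dependencies and system/app paths from the result
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"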
00:33:46.305 12:37:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:46.305 12:37:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:46.305 12:37:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:46.305 12:37:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:46.305 12:37:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:46.305 12:37:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:46.305 12:37:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:46.305 12:37:51 -- paths/export.sh@5 -- $ export PATH
00:33:46.305 12:37:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
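Note that paths/export.sh prepends the golangci, go, and protoc directories unconditionally each time it runs, which is why each of them appears twice in the echoed PATH above. A hypothetical duplicate-free variant (not the actual export.sh code) would test for membership before prepending:

    #!/usr/bin/env bash
    # Hypothetical duplicate-free alternative to the blind prepends above;
    # illustrative only, not the paths/export.sh implementation.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, keep PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH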
00:33:46.305 12:37:51 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:46.305 12:37:51 -- common/autobuild_common.sh@437 -- $ date +%s
00:33:46.305 12:37:51 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718015871.XXXXXX
00:33:46.305 12:37:51 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718015871.bFXcTK
00:33:46.305 12:37:51 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:33:46.305 12:37:51 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:33:46.305 12:37:51 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:46.305 12:37:51 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:46.305 12:37:51 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:46.305 12:37:51 -- common/autobuild_common.sh@453 -- $ get_config_params
00:33:46.305 12:37:51 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:46.305 12:37:51 -- common/autotest_common.sh@10 -- $ set +x
00:33:46.305 12:37:51 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:46.305 12:37:51 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:33:46.305 12:37:51 -- pm/common@17 -- $ local monitor
00:33:46.305 12:37:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:46.305 12:37:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:46.305 12:37:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:46.305 12:37:51 -- pm/common@21 -- $ date +%s
00:33:46.305 12:37:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:46.305 12:37:51 -- pm/common@21 -- $ date +%s
00:33:46.305 12:37:51 -- pm/common@25 -- $ sleep 1
00:33:46.305 12:37:51 -- pm/common@21 -- $ date +%s
00:33:46.305 12:37:51 -- pm/common@21 -- $ date +%s
00:33:46.305 12:37:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718015871
00:33:46.305 12:37:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718015871
00:33:46.305 12:37:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718015871
00:33:46.305 12:37:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718015871
00:33:46.305 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718015871_collect-vmstat.pm.log
00:33:46.305 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718015871_collect-cpu-load.pm.log
00:33:46.305 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718015871_collect-cpu-temp.pm.log
00:33:46.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718015871_collect-bmc-pm.bmc.pm.log
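The pm/common trace above shows the resource-monitor start-up: one collector per entry in MONITOR_RESOURCES (collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm via sudo), each logging under ../output/power, and stop_monitor_resources later reads the matching .pid files and signals the recorded pids, as traced further down. A condensed sketch of that pidfile convention, under the assumption that each collector writes its own pidfile; the real pm/common carries more validation:

    #!/usr/bin/env bash
    # Condensed sketch of the pidfile-based monitor lifecycle traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    power_dir=$rootdir/../output/power
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp)

    start_monitor_resources() {
        local monitor
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # each collector backgrounds itself, logs under $power_dir and is
            # assumed to leave its pid in $power_dir/$monitor.pid
            "$rootdir/scripts/perf/pm/$monitor" -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)"
        done
    }

    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            if [[ -e $power_dir/$monitor.pid ]]; then
                pid=$(< "$power_dir/$monitor.pid")
                kill -TERM "$pid"
            fi
        done
    }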
00:33:47.509 12:37:52 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:33:47.509 12:37:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:47.509 12:37:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:47.509 12:37:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:47.509 12:37:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:47.509 12:37:52 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:47.509 12:37:52 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:47.509 12:37:52 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:47.509 12:37:52 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:47.509 12:37:52 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:47.509 12:37:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:47.509 12:37:52 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:47.509 12:37:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:47.509 12:37:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:47.509 12:37:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:47.509 12:37:52 -- pm/common@44 -- $ pid=917194
00:33:47.509 12:37:52 -- pm/common@50 -- $ kill -TERM 917194
00:33:47.509 12:37:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:47.509 12:37:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:47.509 12:37:52 -- pm/common@44 -- $ pid=917195
00:33:47.509 12:37:52 -- pm/common@50 -- $ kill -TERM 917195
00:33:47.509 12:37:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:47.509 12:37:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:47.509 12:37:52 -- pm/common@44 -- $ pid=917198
00:33:47.509 12:37:52 -- pm/common@50 -- $ kill -TERM 917198
00:33:47.509 12:37:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:47.509 12:37:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:47.509 12:37:52 -- pm/common@44 -- $ pid=917222
00:33:47.509 12:37:52 -- pm/common@50 -- $ sudo -E kill -TERM 917222
00:33:47.509 + [[ -n 294852 ]]
00:33:47.509 + sudo kill 294852
00:33:47.520 [Pipeline] }
00:33:47.539 [Pipeline] // stage
00:33:47.546 [Pipeline] }
00:33:47.569 [Pipeline] // timeout
00:33:47.577 [Pipeline] }
00:33:47.596 [Pipeline] // catchError
00:33:47.602 [Pipeline] }
00:33:47.621 [Pipeline] // wrap
00:33:47.629 [Pipeline] }
00:33:47.648 [Pipeline] // catchError
00:33:47.658 [Pipeline] stage
00:33:47.660 [Pipeline] { (Epilogue)
00:33:47.675 [Pipeline] catchError
00:33:47.677 [Pipeline] {
00:33:47.692 [Pipeline] echo
00:33:47.693 Cleanup processes
00:33:47.699 [Pipeline] sh
00:33:47.986 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:47.986 917300 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:47.986 917746 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:48.003 [Pipeline] sh
00:33:48.293 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:48.293 ++ grep -v 'sudo pgrep'
00:33:48.293 ++ awk '{print $1}'
00:33:48.293 + sudo kill -9 917300
00:33:48.306 [Pipeline] sh
00:33:48.592 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:00.878 [Pipeline] sh
00:34:01.165 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:01.165 Artifacts sizes are good
00:34:01.179 [Pipeline] archiveArtifacts
00:34:01.186 Archiving artifacts
00:34:01.374 [Pipeline] sh
00:34:01.658 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:01.673 [Pipeline] cleanWs
00:34:01.683 [WS-CLEANUP] Deleting project workspace...
00:34:01.683 [WS-CLEANUP] Deferred wipeout is used...
00:34:01.690 [WS-CLEANUP] done
00:34:01.692 [Pipeline] }
00:34:01.711 [Pipeline] // catchError
00:34:01.724 [Pipeline] sh
00:34:02.011 + logger -p user.info -t JENKINS-CI
00:34:02.022 [Pipeline] }
00:34:02.039 [Pipeline] // stage
00:34:02.046 [Pipeline] }
00:34:02.064 [Pipeline] // node
00:34:02.068 [Pipeline] End of Pipeline
00:34:02.105 Finished: SUCCESS